AWS Server Size for Hosting [closed] - amazon-web-services

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I am looking into purchasing server space with AWS to host what will eventually be over 50 websites, with widely varying levels of traffic. Does anyone have a recommendation on what size of server would be able to handle this many sites?
Also, is it more cost-effective/efficient to host a separate EC2 instance for each site, or to purchase one large umbrella server and host all the sites on a single instance?
Thanks,

Co-locating services on a single server versus multiple servers is a core architecture decision your firm should make. It will directly impact the performance, security and cost of your systems.
The benefit of having multiple services on the same Amazon EC2 instance is that they can share resources (RAM, CPU) so if one application is busy, it has access to more total resources. This is in contrast to running each one on a separate instance, where there is a smaller, finite quantity of resources. Think of it like car-pooling vs riding motorbikes.
Sharing resources means you can probably lower costs, since you'll need less total capacity.
From a security perspective, running on separate instances is much better because they are isolated from each other. You should also investigate network isolation to prevent potential breaches between instances on the same virtual network.
You should also look at the ability to host all of these services using a multi-tenant system as opposed to 50 completely separate systems. This has further benefits in terms of sharing resources and reducing costs. For example, Salesforce.com doesn't run a separate computer for each customer -- all the customers use the same systems, but security and data are kept separate at the application layer.
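As a tiny illustration of that application-layer separation (this is only a sketch; the schema and tenant names are made up):

```python
# Hypothetical sketch of application-layer multi-tenancy: all customers share
# one table, and every query is scoped by a tenant_id column.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, tenant_id TEXT, total REAL)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "acme", 120.00), (2, "globex", 75.50), (3, "acme", 9.99)],
)

def orders_for(tenant_id):
    # The tenant_id filter is what keeps each customer's data separate,
    # even though everyone shares the same database and hardware.
    return db.execute(
        "SELECT id, total FROM orders WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()

print(orders_for("acme"))  # [(1, 120.0), (3, 9.99)]
```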
Bottom line: There are some major architectural decisions to make if you wish to roll-out secure, performant systems.

The short correct answer:
If those sites are only static (HTML, CSS and JS), EC2 won't be necessary: you can use S3 static website hosting, which is cheaper and means you won't have to worry about scaling.
But if those sites have a dynamic part (PHP, Python and similar), it is a different story.
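A minimal sketch of the static option with boto3 (the bucket name is hypothetical, and a real setup would also need a public-read bucket policy or CloudFront in front of the bucket):

```python
# Minimal sketch: host a static site on S3 instead of EC2.
# Assumes boto3 credentials are configured; the bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "my-static-site-example"  # must be globally unique

s3.create_bucket(Bucket=bucket)  # outside us-east-1, add CreateBucketConfiguration

# Turn the bucket into a static website
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload one page; in practice you would sync the whole site
s3.put_object(
    Bucket=bucket,
    Key="index.html",
    Body=b"<h1>Hello</h1>",
    ContentType="text/html",
)
```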

Related

How to reduce database latency on AWS [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 months ago.
I need to run my website from two different countries, but the database has to stay in one country. How can I reduce my database latency when it is accessed across regions?
It is best practice to always keep your database as close as possible to your application to ensure low-latency connections. It is a bad idea to separate them into different places around the world.
One idea:
Only run one application server (in the same location as your database), rather than two. Reduce application latency by using Amazon CloudFront to cache static content closer to your users.
If you really must separate the database from the application server:
Create a Read Replica of the database in the same region as your application. Note that this will be a read-only copy of the database, so your application will need to send updates to the master database in the other region. Fortunately, most database access is for Reads.
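A rough sketch of that read/write split, assuming a PostgreSQL database accessed with psycopg2 (the endpoints, regions and credentials are hypothetical placeholders):

```python
# Sketch: route reads to a same-region replica, send writes to the remote master.
import psycopg2

read_conn = psycopg2.connect(
    host="myapp-replica.eu-west-1.rds.amazonaws.com",  # replica near the app
    dbname="myapp", user="app", password="secret",
)
write_conn = psycopg2.connect(
    host="myapp-master.us-east-1.rds.amazonaws.com",   # master in the other region
    dbname="myapp", user="app", password="secret",
)

def get_product(product_id):
    with read_conn.cursor() as cur:  # low-latency local read
        cur.execute("SELECT name, price FROM products WHERE id = %s", (product_id,))
        return cur.fetchone()

def record_order(product_id, qty):
    with write_conn.cursor() as cur:  # writes pay the cross-region latency
        cur.execute(
            "INSERT INTO orders (product_id, qty) VALUES (%s, %s)",
            (product_id, qty),
        )
    write_conn.commit()
```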
Alternatively, use a local cache server (eg Amazon ElastiCache) in your remote region. Consult the cache before going to the database. This is similar to the Read Replica scenario.
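A minimal sketch of that cache-aside pattern with the redis client, reusing the hypothetical get_product helper from the sketch above (the ElastiCache endpoint and TTL are illustrative):

```python
# Cache-aside: check the local cache first, fall back to the remote database.
import json
import redis

cache = redis.Redis(host="mycache.abc123.euw1.cache.amazonaws.com", port=6379)

def get_product_cached(product_id):
    key = f"product:{product_id}"
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)  # served locally, no cross-region round trip

    row = get_product(product_id)  # falls back to the remote database
    cache.setex(key, 300, json.dumps(row))  # cache for 5 minutes
    return row
```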
All of these options avoid the scenario where the database is separated from the application server.
Network latency cannot be predicted, and during peak hours it will definitely impact the application.
Consider creating read replicas in the one country and keeping the master in the other.
If you can't push your database to multiple regions (by using read replicas, for example), then you should consider putting CloudFront in front of your website to allow caching of requests in the various regions you care about where possible.
This won't technically improve the latency to the database, but in terms of your users' perception of performance it may have the same end result, by not requiring a round trip to the database server for every request.
If you make a lot of the same queries, PolyScale is a great alternative to a read replica or creating your own cache. You can connect your database to PolyScale and then just update your application to make requests to the PolyScale cache rather than the database directly. This eliminates the cost and complexity of a read replica and avoids the challenges of determining what to cache and what TTL to use for a cache you write yourself.
You can read this article about Database Read Replicas vs PolyScale Database Edge Caching.

AWS - EC2: Why would I need more than one instance? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Sorry if there is an obvious answer to this, but I'm currently in the process of setting up a new company, from where I'll be hosting client websites. Rather than use an external hosting company, I'd like to take full control of this through EC2.
Can multiple websites be hosted on a single instance, or will each new site require its own instance?
Many thanks,
L
Multiple websites can be hosted on one instance, given that the instance is large enough to handle all the traffic from all the different websites.
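As a toy illustration of that idea (name-based virtual hosting), here is a sketch in pure Python; in practice you would use nginx or Apache virtual hosts, and the hostnames and document roots below are hypothetical:

```python
# Toy sketch: one server process serving several sites by dispatching on the
# Host header. Real deployments would use nginx/Apache virtual hosts instead.
import os
from http.server import HTTPServer, SimpleHTTPRequestHandler

SITES = {
    "clientone.example.com": "/var/www/clientone",
    "clienttwo.example.com": "/var/www/clienttwo",
}

class VirtualHostHandler(SimpleHTTPRequestHandler):
    def translate_path(self, path):
        path = path.split("?", 1)[0].split("#", 1)[0]      # drop query/fragment
        host = self.headers.get("Host", "").split(":")[0]  # pick site by hostname
        root = SITES.get(host, "/var/www/default")
        return os.path.normpath(os.path.join(root, path.lstrip("/")))

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), VirtualHostHandler).serve_forever()
```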
Here are two main reasons you would use more than one EC2 instance:
Load: A single instance would not be able to handle the load. In this case you would want to start up multiple servers and place them behind a load balancer so that the load can be shared across them. You might also want to split out each site into separate clusters of EC2 servers to further distribute the load.
Fault tolerance: If you don't design your system with the expectation that an EC2 instance can and will disappear at some point, then you will eventually have a very unpleasant surprise. With your site running on multiple servers, spread out across multiple availability zones, if a server or even an entire AZ goes down your site will stay up.
You don't say if each client will require the same code base or if each client will have a different site, but modularity is also important.
What happens if one client requires a different AMI? Say one client needs a special OS package on the server. You don't want to keep updating everybody's app every time you have a new client requirement.
So, multiple instances will allow you to scale each customer at different times and rates, and will let you develop each solution without the others being affected.
Pricing can also be lower, because you can use auto scaling to be very efficient about the CPU used at any given time, compared to one big instance where you will need to estimate future use.
In short, the biggest value of the cloud is elasticity and modularity, so use that in your favor.
In addition to what Mark B said in his answer about load and fault tolerance, having multiple instances allows you to have them in different regions of the world. This is helpful if you have requirements concerning the legality of where the data can be stored, or more usually about the latency between the data and the user application. Data stored in an EU region is going to have much less latency for EU users than data stored in a NA region.

What do we actually mean by large scale web application? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
What do we actually mean by a large-scale web application?
What are the criteria for calling something a large-scale web application?
Is it the number of lines of code in the application, or
the number of users per day of the web application, say 10K per day?
What do we actually mean by large-scale web applications? That depends on who you ask.
If people have built smaller applications in the past and now need to build a larger one that handles more data and more traffic, they might call it a large-scale web application. But if you then compare that site with ones like LinkedIn, Facebook, or Google, it will still be a small application.
People have different notions of what large is: what's large for some might be medium-sized for others and small for others still. But a large-scale web application has characteristics such as these:
performance, being able to handle a large number (millions) of users/requests or a large number (thousands) of transactions per second. Both have challenges depending on the type of app (CPU bound, IO bound or both).
scalability, the horizontal type, not only at the web server level but also at the database level. Depending on what you are doing, an RDBMS might not cut it anymore, so you have to walk the NoSQL path, sometimes using more than one product, as NoSQL solutions tend to be specialized systems tackling specific use cases as opposed to being general-purpose databases as RDBMSs are. A lot of integration challenges arise from connecting heterogeneous solutions together and making them behave as a single application.
large-scale applications are distributed, taking advantage of CDNs or running the app on servers geographically closer to the user. You can easily have hundreds or thousands of server nodes, with a large sysadmin team having to manage the setup. If you don't have your own data centers you can run in the cloud.
besides a large sys admin team, you often have a large development team needing to optimize for performance and scalability, designers and front end developers working on providing a fluid user interface, with mobile support, etc;
having to deal with a large volume of data and with lots of data types. No longer handling just products, customers, orders, etc, but also clicks, page views/hits, events, logs, customer behavior tracking, etc. This is in relation to the previous NoSQL point, but also with this volume of data these apps tend to have a large back-office that offers all kinds of reports, graphs, administrative tools, etc to manage the app itself.
availability 24/7;
some other, miscellaneous keywords to throw into the mix like SOA architecture, microservices, data-warehouses, security, distributed caches, continuous deployment, etc.
etc.
These are some of the characteristics (I think) large-scale web apps have in common, and it's important to think about these aspects up front and deal with the challenges that come from them from the beginning, when building the app.

Mashery vs WSO2 vs 3scale [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I would like to know the differences between Mashery, WSO2 and 3scale. Can someone who has used API managers before give their opinion? What are the advantages and disadvantages of each one?
thanks
cheers
Not sure, but this question might end up flagged as off-topic (vendor comparison), but anyway I'll jump in. I work at 3scale (full disclosure), but hopefully this is useful anyway - the three are pretty different. Trying to be as neutral as possible:
3scale uses NGINX and/or open source code plugins to enforce all of the API traffic rules and limits (rate limits, key security, OAuth, analytics, switching apps on and off, etc.) and the traffic always flows directly to your servers (not via the cloud), so you don't have additional latency or privacy concerns. Because it's NGINX it's also widely supported, very fast and flexible. Then it has a SaaS backend that manages all the analytics, rate limits, policies, developer portal, alerts, etc. and synchronizes across all the traffic manager nodes. It's free to use up to nearly 5 million API calls per month.
WSO2's system is an additional module to the WSO2 ESB, so if you're using that it makes a lot of sense. It runs everything locally with no cloud components - a pro or a con depending on how you see it. It's also been around for a lot less time and doesn't have such a large user base.
Mashery has two systems - the main one, in which API traffic flows through Mashery's cloud systems first and has traffic management applied there. So there is always a latency-heavy round trip between the users of the API and your servers, and it means Mashery is in your API traffic's critical path. They also have an on-premises traffic manager, but it's much less widely used. Both solutions have very significant costs and long-term commitments.
At 3scale, what we see as the main advantage is that you have tons of control over how you set up all the traffic flow and never have to route through a third party, plus you have the benefit of having all the heavy lifting hosted and synchronized across multiple data centers. We're also committed to having a strong free-forever tier of service, since we want to see a lot of APIs out there! http://www.3scale.net/
Good luck with your choice!
steve.

Can the way a site is coded affect how much we spend on hosting? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
Our website is an eCommerce store trading in ethically sourced loose diamonds. We do not get much traffic and yet our Amazon bill is huge ($300/month for 1,500 unique visits). Is this normal?
I do know that twice a day we pull data into the database from another source, and that the files are large. Does it make sense to just use regular hosting for this process and then use the Amazon one just for our site?
Most of the cost is for Amazon Elastic Compute Cloud. About 20% is for RDS service.
I am wondering if:
(a) our developers have done something which leads to this kind of usage OR
(b) Amazon is just really expensive
Is there a paid-for service which we can use to ensure our site is optimised for its hosting - in terms of cost, usage and speed?
It should probably cost you around 30-50 dollars a month. $300 seems higher than necessary.
For 1,500 visitors, you can most likely get away with using an m1.small instance.
I'd say check out the AWS Trusted Advisor service, which will tell you about your utilization and where you can optimize your usage, but you can only get that with AWS Business support ($100/month). However, considering you're way over what is expected, it might be worth looking into.
Trusted Advisor will inform you of quite a few things:
cost optimization
security
fault tolerance
performance
I've generally found it to be one of the most useful additions to my AWS infrastructure.
Additionally, if you were to sign up for Business support, not only do you get Trusted Advisor, but you can also ask questions directly to the support staff via chat, email, or phone. That would also be quite useful to help you pinpoint your problem areas.
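If you'd rather see the breakdown yourself before paying for support, here is a rough sketch using boto3 and the Cost Explorer API to group a month's spend by service (the dates are placeholders, and note that Cost Explorer bills a small fee per API request):

```python
# Rough sketch: break a month's AWS bill down by service with Cost Explorer.
# Assumes boto3 credentials with ce:GetCostAndUsage permission; dates are placeholders.
import boto3

ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```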