AWS - EC2: Why would I need more than one instance? [closed] - amazon-web-services

Sorry if there is an obvious answer to this, but I'm currently in the process of setting up a new company, from which I'll be hosting client websites. Rather than use an external hosting company, I'd like to take full control of this through EC2.
Can multiple websites be hosted on a single instance, or will each new site require its own instance?
Many thanks,
L

Multiple websites can be hosted on one instance, given that the instance is large enough to handle all the traffic from all the different websites.
Here are two main reasons you would use more than one EC2 instance:
Load: A single instance would not be able to handle the load. In this case you would want to start up multiple servers and place them behind a load balancer so that the load can be shared across them. You might also want to split out each site into separate clusters of EC2 servers to further distribute the load.
Fault tolerance: If you don't design your system with the expectation that an EC2 instance can and will disappear at some point, then you will eventually have a very unpleasant surprise. With your site running on multiple servers, spread out across multiple availability zones, if a server or even an entire AZ goes down your site will stay up.
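As a rough illustration of the multi-server, multi-AZ idea, here is a minimal boto3 sketch (the region, VPC, subnet and instance IDs are placeholders, not anything from the question) that puts two instances from different Availability Zones behind an Application Load Balancer:

```python
import boto3

# Hypothetical IDs: substitute your own VPC, subnets (in different AZs) and instances.
elb = boto3.client("elbv2", region_name="us-east-1")

lb = elb.create_load_balancer(
    Name="client-sites-alb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],  # one subnet per Availability Zone
)["LoadBalancers"][0]

tg = elb.create_target_group(
    Name="client-sites-tg",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]

# Register web servers running in different AZs so one failure doesn't take the site down.
elb.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0aaa111"}, {"Id": "i-0bbb222"}],
)

elb.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
```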

You don't say if each client will require the same code base or if each client will have a different site, but modularity is also important.
What happens if one client requires a different AMI? Say one client requires some special OS package on the server. You don't want to keep updating everybody's app every time you have a new client requirement.
So, multiple instances will allow you to scale each customer at different times and rates, and will allow you to develop each solution without affecting the others.
Pricing can also be cheaper, as you can use auto scaling to be very efficient about the CPU used at any given time, compared to one big instance where you need to estimate future use up front.
In short, the biggest value of the cloud is elasticity and modularity, so use that in your favor.
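To make the elasticity point concrete, here is a hedged boto3 sketch (the group, launch template and subnet names are made up) of giving one client site its own Auto Scaling group with a CPU-based target-tracking policy, so it grows and shrinks independently of the other clients:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# One Auto Scaling group per client site, so each client scales on its own schedule.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="client-a-web",
    LaunchTemplate={"LaunchTemplateName": "client-a-web-template", "Version": "$Latest"},
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # subnets in different AZs
)

# Track average CPU instead of guessing future peak capacity up front.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="client-a-web",
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)
```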

In addition to what Mark B said in his answer about load and fault tolerance, having multiple instances allows you to have them in different regions of the world. This is helpful if you have requirements concerning the legality of where the data can be stored, or, more usually, concerning the latency between the data and the user-facing application. Data stored in an EU region is going to have much less latency for EU users than data stored in a NA region.

Related

GCP - Rolling update max unavailable [closed]

I am trying to understand the reasoning behind the GCP error message.
To give you the context,
I have 3 instances running, 1 instance per zone, using a managed instance group.
I want to do an update, and I would like to do it one instance at a time, so max unavailable should be 1. However, GCP does not seem to like that.
How do I achieve high availability here if I set max unavailable to 3?
The reason for the error is that when you initiate an update to a regional MIG, the Updater always updates instances proportionally and evenly across each zone, as described in the official documentation. If the value you set is lower than the number of zones, the update cannot be carried out proportionally and evenly across zones.
Now, as you said, it does not make much sense from the high availability standpoint; but this is because you are keeping the instance names when replacing them, and that forces the replacement method to be RECREATE instead of SUBSTITUTE. The maximum surge for the RECREATE method must be 0, because the original VM has to be terminated before the new one is created in order to reuse the same name.
On the other hand, using the SUBSTITUTE method allows configuring a maximum surge that will be enforced during the update process, creating new VMs with a different name before terminating the old ones, and thus always having VMs available.
The recommendation then is to use the SUBSTITUTE method instead to achieve high availability during your Rolling Updates; if for some reason you need to preserve the instance names, then you can achieve high availability by instantiating more than 1 VM per zone.
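For reference, a rolling update along those lines could be requested roughly as follows with the google-api-python-client discovery client; this is a sketch only, the project/region/group names are placeholders, and the updatePolicy field names reflect my reading of the MIG API rather than anything from the question:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses application default credentials

# Ask the regional MIG to surge new (renamed) VMs before deleting old ones,
# so capacity stays available while the update rolls through the zones.
operation = compute.regionInstanceGroupManagers().patch(
    project="my-project",
    region="europe-west1",
    instanceGroupManager="my-regional-mig",
    body={
        "updatePolicy": {
            "type": "PROACTIVE",                # apply the new template to existing VMs
            "replacementMethod": "SUBSTITUTE",  # create-before-delete, new instance names
            "maxSurge": {"fixed": 3},           # one extra VM per zone during the update
            "maxUnavailable": {"fixed": 0},     # never take existing capacity down
        }
    },
).execute()

print(operation.get("status"))
```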
I don't think that's really achievable in your context, since there is only 1 instance per zone in the managed instance group. The group would not be highly available if 33% of its instances were unavailable; availability is simply reduced during the update and restored once it completes.
I would suggest giving [1] a good read in order to properly understand how MIG availability is defined on GCP; essentially, with 2 instances per zone you could have had 2-2-2, run the update, and still end up with 2-2-2 throughout.
Also, please check [2], as it is a concrete example of the 33% point above.
[1] https://cloud.google.com/compute/docs/instance-groups/regional-migs#provisioning_a_regional_managed_instance_group_in_three_or_more_zones
[2] https://cloud.google.com/compute/docs/instance-groups/regional-migs#:~:text=Use%20the%20following%20table%20to%20determine%20the%20minimum%20recommended%20size%20for%20your%20group%3A

Should I create one instance for all my applications or should I use one for each? [closed]

I'm very new to IoT projects and programming. I've got an ESP01 with a simple accelerometer and I'd like to send its data to a database. I have already tested all of this locally on my machine and everything works fine.
This is what I'm using
ESP01 + accelerometer sending data to a Mosquitto MQTT broker running locally on my machine, a Node-RED app which sends that data to InfluxDB, and then Grafana to display it.
Now I'd like to host everything on an AWS cloud server. I found out that I should create an AWS EC2 instance and install Node-RED and MQTT on an Ubuntu machine. The big question for me is:
"Should I create one instance for Node-RED and one instance for the MQTT broker, or can I use just one instance and install both of them on it?"
Last but not least: should I do the same for InfluxDB?
It all depends on money. If you strictly followed the AWS guidelines they would tell you that you should have a load balancer and at least two m-type instances and each service probably should have its own pair of instances. That way - if one node fails, there will be no downtime. That setup can cost you maybe around $100/month.
But for this kind of project (just for yourself) I would go with just one small (the smallest possible) instance and I would put everything on it. It's good to know that in AWS any instance can fail at any time. It's rare, but when you have thousands of instances, it happens on a regular basis. Therefore, the simplest solution is to create your setup with CloudFormation and have a regular backup. Possibly you can create an auto scaling group with just one instance: if it fails, it will get automatically replaced and you can (automatically or manually) recover the data from your backup. For example, during boot it can look for the last backup in S3 and install it (via a script passed in the instance metadata/user data).
This is not perfect, but for sure it is acceptable for a hobby project. And it is cheap.
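The "look for the last backup in S3 during boot" step could be sketched roughly like this with boto3 (the bucket name, key prefix and target path are made up for the example):

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "my-hobby-project-backups"  # hypothetical bucket name
PREFIX = "influxdb/"                 # hypothetical key prefix for backup archives

# List the backup objects under the prefix and pick the most recently modified one.
objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])
if objects:
    latest = max(objects, key=lambda obj: obj["LastModified"])
    s3.download_file(BUCKET, latest["Key"], "/tmp/latest-backup.tar.gz")
    print(f"Restored {latest['Key']} from s3://{BUCKET}")
else:
    print("No backup found; starting fresh.")
```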
It depends on a variety of factors, including risk profile. If it's just for experiments and development, it is probably fine to have one instance hosting everything. If it's something you're providing as a service for (paying) customers, then it's better to have them distributed across multiple instances to provide resiliency.
Mosquitto and Node-RED will both run happily on a single instance. Even a nano instance is probably more than adequate for hobby-level use; a Lightsail instance will probably be more than enough as well.
As a side note you REALLY need to make sure you enable the admin authentication for Node-RED before placing it on an internet facing machine.

AWS Server Size for Hosting [closed]

I am looking into purchasing server space with AWS to host what will eventually be over 50 websites. They will have many different ranges of traffic coming in. I would like to know if anyone has a recommendation on what size of server would be able to handle this many sites.
Also, I was wondering if it's more cost effective/efficient to host a separate EC2 instance for each site or to purchase a large umbrella server and host all sites on a single instance?
Thanks,
Co-locating services on single/multiple servers is a core architecting decision your firm should make. It will directly impact the performance, security and cost of your systems.
The benefit of having multiple services on the same Amazon EC2 instance is that they can share resources (RAM, CPU) so if one application is busy, it has access to more total resources. This is in contrast to running each one on a separate instance, where there is a smaller, finite quantity of resources. Think of it like car-pooling vs riding motorbikes.
Sharing resources means you can probably lower costs, since you'll need less total capacity.
From a security perspective, running on separate instances is much better because they are isolated from each other. You should also investigate network isolation to prevent potential breaches between instances on the same virtual network.
You should also look at the ability to host all of these services using a multi-tenant system as opposed to 50 completely separate systems. This has further benefits in terms of sharing resources and reducing costs. For example, Salesforce.com doesn't run a separate computer for each customer -- all the customers use the same systems, but security and data is kept separate at the application layer.
Bottom line: There are some major architectural decisions to make if you wish to roll-out secure, performant systems.
The short correct answer:
If those sites are only static (HTML, CSS and JS), EC2 won't be necessary: you can use S3, it will be cheaper, and you won't have to worry about scaling.
But if those sites have a dynamic part (PHP, Python and similar), well, that is a different story.
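If the static-on-S3 route fits, the bucket setup can be sketched roughly like this with boto3 (the bucket name is a placeholder, and in practice you would also need a bucket policy or a CloudFront distribution in front to serve the content publicly):

```python
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
bucket = "example-client-site"  # hypothetical name; S3 bucket names must be globally unique

# Create the bucket and enable static website hosting on it.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload a page; setting ContentType lets browsers render it instead of downloading it.
s3.upload_file("index.html", bucket, "index.html",
               ExtraArgs={"ContentType": "text/html"})
```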

How to reduce database latency on AWS [closed]

I need to run my website from two different countries, but the database should be in only one country. How can I improve my database latency when it is accessed cross-region?
It is best practice to always keep your database as close as possible to your application to ensure low-latency connections. It is a bad idea to separate them into different places around the world.
One idea:
Only run one application server (in the same location as your database), rather than two. Reduce application latency by using Amazon CloudFront to cache static content closer to your users.
If you really must separate the database from the application server:
Create a Read Replica of the database in the same region as your application. Note that this will be a read-only copy of the database, so your application will need to send updates to the master database in the other region. Fortunately, most database access is for Reads.
Alternatively, use a local cache server (eg Amazon ElastiCache) in your remote region. Consult the cache before going to the database. This is similar to the Read Replica scenario.
All of these options avoid the scenario where the database is separated from the application server.
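The ElastiCache option boils down to the classic cache-aside pattern. Here is a minimal sketch (the cache hostname and the query_remote_database helper are made up, and it assumes the redis-py client, since ElastiCache for Redis speaks the Redis protocol):

```python
import json
import redis  # redis-py client; ElastiCache for Redis speaks the same protocol

cache = redis.Redis(host="my-cache.abc123.euw1.cache.amazonaws.com", port=6379)

def query_remote_database(user_id):
    # Placeholder for the real cross-region call to the master database.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=60):
    """Cache-aside: try the in-region cache first, fall back to the remote database."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # served locally, no cross-region round trip

    row = query_remote_database(user_id)
    cache.setex(key, ttl_seconds, json.dumps(row))  # keep it local for the next reader
    return row
```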
Network latencies cannot be predicted; during peak hours they will definitely impact the application.
Consider creating read replicas in the one country and keeping the master in the other.
If you can't push your database to multiple regions (by using read replicas for example), then you should consider using cloudfront in front of your website to allow for caching of requests in the various regions you care about when possible.
This won't technically improve the latency to the database, but in terms of your users' perception of performance it may have the same end result by not requiring a round trip to the db server for every request.
If you make a lot of the same queries, PolyScale is a great alternative to a read replica or creating your own cache. You can connect your database to PolyScale and then just update your application to make requests to the PolyScale cache rather than the database directly. This eliminates the cost and complexity of a read replica and avoids the challenges of determining what to cache and what TTL to use for a cache you write yourself.
You can read this article about Database Read Replicas vs PolyScale Database Edge Caching.

Mashery vs WSO2 vs 3scale [closed]

I would like to know the differences between Mashery, WSO2 and 3scale. Could someone who has used API managers before give their opinion? What are the advantages and disadvantages of each one?
thanks
cheers
Not sure, but this question might end up flagged as off topic - vendor comparison, but anyway I'll jump in. I work at 3scale (full disclosure) but hopefully this is useful anyway - the three are pretty different. Trying to be as neutral as possible!:
3scale uses NGINX and/or open source code plugins to enforce all of the API traffic rules and limits (rate limits, key security, OAuth, analytics, switching apps on and off etc.), and the traffic always flows directly to your servers (not via the cloud), so you don't have additional latency or privacy concerns. Because it's NGINX it's also widely supported, very fast and flexible. Then it has a SaaS backend that manages all the analytics, rate limits, policies, developer portal, alerts etc. and synchronizes across all the traffic manager nodes. It's free to use up to nearly 5 million API calls per month.
WSO2's system is an additional module to the WSO2 ESB so if you're using that it makes a lot of sense. It runs everything locally with no cloud components - a pro or a con depending on how you see it. It's also been around a lot less time and doesn't have such a large userbase.
Mashery has two systems - the main one, in which the API traffic flows through Mashery's cloud systems first and has traffic management applied there. So there is always a latency-heavy round trip between the users of the API and your servers, plus it means Mashery is in your API traffic's critical path. They also have an on-premise traffic manager, but it's much less widely used. Both solutions have very significant costs and long-term commitments.
At 3scale, what we see as the main advantage is that you have tons of control as to how you set up all the traffic flow and never have to route through a third party, plus you have the benefit of having all the heavy lifting hosted and synchronized across multiple data centers. We're also committed to having a strong free-forever tier of service since we want to see a lot of APIs out there! http://www.3scale.net/
Good luck with your choice!
steve.