I want to use Amazon ELB plus EC2 for fault tolerance (high availability).
In particular, it is not clear how it supports the following high availability features:
Does it have preemptive migration?
Checkpointing?
Job migration?
Self-detection?
Fault mask?
And is it proactive or reactive?
Does ELB have…
1) Preemptive migration: You could probably approximate this by hooking the health checks up to something within your application that detects early signs of failure, but it's not part of the design strategy. In the AWS model, nodes are treated as immutable: a bad node is marked unhealthy and a new node is brought on to replace it, rather than work being migrated off a node before it fails.
2) Checkpointing: Duplicating data across nodes as a regular process isn't part of the AWS high-availability vision. Data HA tends to happen at the database layer, not as data on the nodes themselves.
3) Job migration: "Sticky sessions" let users continue against the same node, but how job data is persisted in the event of a failure isn't controlled by the ELB.
4) Self-detection: In the context of an ELB this is pretty much what the health checks do, though the health checks detect failure in the downstream nodes; you have to think of the system as ELB + nodes.
5) Fault masking: This is more of a low-level concern, and I don't see how it applies to an ELB.
I suspect that many of your questions would be better addressed at the database layer. AWS RDS has an interesting set of HA capabilities.
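To make point 4) concrete, here is a minimal Python sketch of how threshold-based health checking behaves. The parameter names echo the ELB `HealthyThreshold`/`UnhealthyThreshold` settings, but the values and the loop itself are illustrative, not AWS code:

```python
def run_health_checks(results, unhealthy_threshold=2, healthy_threshold=2):
    """Simulate ELB-style health checking: a node is marked unhealthy after
    `unhealthy_threshold` consecutive failed checks, and healthy again after
    `healthy_threshold` consecutive successful ones.

    `results` is a sequence of booleans (True = check passed); returns the
    node's state after each check.
    """
    state = "healthy"
    consecutive_fails = 0
    consecutive_oks = 0
    history = []
    for ok in results:
        if ok:
            consecutive_oks += 1
            consecutive_fails = 0
            if state == "unhealthy" and consecutive_oks >= healthy_threshold:
                state = "healthy"
        else:
            consecutive_fails += 1
            consecutive_oks = 0
            if state == "healthy" and consecutive_fails >= unhealthy_threshold:
                state = "unhealthy"
        history.append(state)
    return history
```

Note that two failed checks in a row flip the node to unhealthy, and it takes two clean checks to bring it back; that hysteresis is why a single flaky probe doesn't cause a node to be cycled out.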
I'm very new to IoT projects and programming. I've got an ESP01 with a simple accelerometer and I'd like to send its data to a database. I've already tested all of this locally on my machine and everything works fine.
This is what I'm using:
ESP01 + accelerometer sending data to a Mosquitto MQTT broker running locally on my machine, a Node-RED app which sends that data to InfluxDB, and then Grafana to display it.
Now I'd like to host everything on a cloud server (AWS). I found out that I should create an AWS EC2 instance and install Node-RED and MQTT on an Ubuntu machine. The big question for me is:
"Should I create one instance for Node-RED and one instance for the MQTT broker, or can I use just one instance and install both of them on it?"
Last but not least: should I do the same for InfluxDB?
It all depends on money. If you strictly followed the AWS guidelines, you would have a load balancer and at least two m-type instances, and each service would probably have its own pair of instances. That way, if one node fails, there is no downtime. That setup could cost you around $100/month.
But for this kind of project (just for yourself) I would go with just one small (the smallest possible) instance and put everything on it. It's good to know that in AWS any instance can fail at any time. It's rare, but when you have thousands of instances, it happens on a regular basis. Therefore, the simplest solution is to create your setup with CloudFormation and keep a regular backup. You could also create an Auto Scaling group with just one instance: if it fails, it gets replaced automatically, and you can (automatically or manually) recover the data from your backup. For example, during boot the instance can look for the last backup in S3 and install it (via a user-data script).
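The "find the last backup in S3" step of that boot script can be sketched like this. The `backups/` prefix and the timestamped key naming scheme are assumptions for illustration; a real boot script would list the bucket with boto3 or the AWS CLI rather than take a list of keys:

```python
def latest_backup(keys, prefix="backups/"):
    """Pick the most recent backup from a list of S3 object keys.

    Assumes keys are named like backups/YYYY-MM-DDTHH-MM-SS.tar.gz, so
    ISO-style timestamps sort lexicographically and max() finds the newest.
    Returns None if no backup exists yet (e.g. first boot).
    """
    candidates = [k for k in keys if k.startswith(prefix)]
    return max(candidates) if candidates else None
```

The point of handling the empty case is exactly the scenario described above: on the very first boot there is no backup to restore, and the script should proceed with a fresh install instead of crashing.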
This is not perfect, but it is certainly acceptable for a hobby project, and it is cheap.
It depends on a variety of factors, including your risk profile. If it's just for experiments and development, it's probably fine to have one instance hosting everything. If it's something you're providing as a service for (paying) customers, then it's better to distribute the services across multiple instances for resiliency.
Mosquitto and Node-RED will both run happily on a single instance, and even a nano instance is probably more than adequate for hobby-level use; a Lightsail instance will probably be more than enough.
As a side note, you REALLY need to make sure you enable admin authentication for Node-RED before placing it on an internet-facing machine.
I am looking into purchasing server space with AWS to host what will eventually be over 50 websites. They will have widely varying levels of incoming traffic. I would like to know if anyone has a recommendation on what size of server would be able to handle this many sites.
Also, I was wondering whether it's more cost-effective/efficient to host a separate EC2 instance for each site, or to purchase one large umbrella server and host all the sites on a single instance?
Thanks,
Co-locating services on single/multiple servers is a core architecting decision your firm should make. It will directly impact the performance, security and cost of your systems.
The benefit of having multiple services on the same Amazon EC2 instance is that they can share resources (RAM, CPU) so if one application is busy, it has access to more total resources. This is in contrast to running each one on a separate instance, where there is a smaller, finite quantity of resources. Think of it like car-pooling vs riding motorbikes.
Sharing resources means you can probably lower costs, since you'll need less total capacity.
From a security perspective, running on separate instances is much better because they are isolated from each other. You should also investigate network isolation to prevent potential breaches between instances on the same virtual network.
You should also look at the ability to host all of these services using a multi-tenant system as opposed to 50 completely separate systems. This has further benefits in terms of sharing resources and reducing costs. For example, Salesforce.com doesn't run a separate computer for each customer -- all the customers use the same systems, but security and data is kept separate at the application layer.
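The application-layer isolation idea mentioned above can be sketched in a few lines of Python. The row shape and the `tenant_id` field are hypothetical; in a real system the same filter would be applied in every database query (or enforced by the database itself, e.g. row-level security):

```python
def tenant_rows(rows, tenant_id):
    """Application-layer multi-tenancy: all tenants share one data store,
    but every read is filtered by the requesting tenant's id, so no tenant
    can see another tenant's data."""
    return [row for row in rows if row["tenant_id"] == tenant_id]

# Shared table holding two tenants' data side by side:
shared = [
    {"tenant_id": 1, "page": "home"},
    {"tenant_id": 2, "page": "about"},
    {"tenant_id": 1, "page": "blog"},
]
```

Calling `tenant_rows(shared, 1)` returns only tenant 1's two rows; the safety of the whole scheme rests on that filter being applied on every access path, which is the trade-off versus per-instance isolation.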
Bottom line: there are some major architectural decisions to make if you wish to roll out secure, performant systems.
The short, correct answer:
If those sites are only static (HTML, CSS, and JS), EC2 won't be necessary: you can use S3, which is cheaper, and you won't have to worry about scaling.
But if those sites have a dynamic part (PHP, Python, or similar), it's a different story.
I have an iOS application, which hits a backend we've set up on AWS. Basically, we have a Staging and a Production environment, with some basic Load Balancing across two AZs (in Production), a small RDS instance, a small Cache instance, some SQS queues processing background tasks, and S3 serving up assets.
The app is in beta, so "Production" has a limited set of users. Right now, it's about 100, but it could be double or so in the coming weeks.
My question is: we had been using t2.micro instances on Staging and for our initial beta users on Production, and they seemed to perform well. As far as I can see, the CPU usage averages less than 10%, and the maximum seems to be about 25 - 30%.
Judging by these metrics, is there any reason not to continue using the t2 instances for the time being? Is there anything I'm overlooking as far as how the credit system works, or is it possible that I'm getting "throttled" by the T2s?
For the time being, traffic will be pretty predictable, so there won't be 10K users tomorrow :)
You just need to watch the CPU credit metrics on the instances to make sure you don't get throttled. Set up alarms in CloudWatch for this and you should be fine.
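A rough model of the credit system shows why an average of about 10% CPU is sustainable on a t2.micro. The 6-credits-per-hour earn rate, the 10% baseline it implies, and the 144-credit cap were the published t2.micro figures, but verify them against the current AWS documentation before relying on them:

```python
def credit_balance(cpu_util_pct, earn_rate=6.0, start=0.0, cap=144.0):
    """Model a t2.micro's CPU credit balance hour by hour.

    One CPU credit = one vCPU running at 100% for one minute, so a vCPU at
    p percent utilization spends p * 60 / 100 credits per hour while the
    instance earns `earn_rate` credits per hour. The balance is clamped
    between 0 (throttling territory) and `cap` (credits stop accruing).
    `cpu_util_pct` is a list of hourly average CPU percentages.
    """
    balance = start
    for pct in cpu_util_pct:
        balance += earn_rate - pct * 60.0 / 100.0
        balance = max(0.0, min(balance, cap))
    return balance
```

At 10% average CPU the instance spends exactly what it earns, so the balance holds steady; at 25% it drains roughly 9 credits per hour, which is why sustained spikes like the ones described in the question are worth an alarm.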
Sorry if there is an obvious answer to this, but I'm currently in the process of setting up a new company, from where I'll be hosting client websites. Rather than use an external hosting company, I'd like to take full control of this through EC2.
Can multiple websites be hosted on a single instance, or will each new site require its own instance?
Many thanks,
L
Multiple websites can be hosted on one instance, given that the instance is large enough to handle all the traffic from all the different websites.
Here are two main reasons you would use more than one EC2 instance:
Load: A single instance would not be able to handle the load. In this case you would want to start up multiple servers and place them behind a load balancer so that the load can be shared across them. You might also want to split out each site into separate clusters of EC2 servers to further distribute the load.
Fault tolerance: If you don't design your system with the expectation that an EC2 instance can and will disappear at some point, then you will eventually have a very unpleasant surprise. With your site running on multiple servers, spread out across multiple availability zones, if a server or even an entire AZ goes down your site will stay up.
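The load-sharing behavior described above can be sketched as a round-robin that skips dead servers. The instance names and the health map are made up for illustration; a real load balancer determines health via its own checks rather than being handed a dictionary:

```python
import itertools

def route_requests(requests, instances, healthy):
    """Distribute requests round-robin across instances, skipping any
    marked unhealthy -- the way a load balancer keeps a site up when one
    server disappears. Returns (request, instance) assignments."""
    pool = itertools.cycle(instances)
    assignments = []
    for req in requests:
        # Try each instance at most once per request to avoid spinning
        # forever if everything is down.
        for _ in range(len(instances)):
            inst = next(pool)
            if healthy.get(inst, False):
                assignments.append((req, inst))
                break
    return assignments
```

With three instances and one of them down, traffic simply flows to the remaining two; the clients never see the failure, which is the fault-tolerance point made above.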
You don't say whether each client will use the same code base or have a completely different site, but modularity is also important.
What happens if one client requires a different AMI? Say one client needs a special package installed on the server. You don't want to be updating everybody's app every time one client has a new requirement.
So multiple instances will allow you to scale each customer at different times and rates, and to develop each solution without affecting the others.
Costs can also be lower, because you can use auto scaling to be very efficient about the CPU used at any given time, compared to one big instance whose future usage you have to estimate up front.
In short, the biggest value of the cloud is elasticity and modularity, so use that in your favor.
In addition to what Mark B said in his answer about load and fault tolerance, having multiple instances allows you to place them in different regions of the world. This is helpful if you have requirements concerning the legality of where data can be stored, or, more commonly, about the latency between the data and the user application. Data stored in an EU region will have much lower latency for EU users than data stored in a NA region.
I have read a couple of articles and I know the basic concept of cloud computing, but I still don't know exactly what I can do with this kind of service.
As a mobile application developer, I have developed a couple of iPhone applications. I have a Bluehost account with a MySQL database, and a couple of PHP scripts on my server; on the device side, the app sends an HTTP request to the server to get data from the database in XML format. That is basically how I designed and implemented my applications.
Now, what can I do with cloud computing? If I use a cloud computing service such as AWS, how will it change the structure of my application?
Thanks in advance...
Cloud computing doesn't necessarily have to change the structure of your app. The main benefit of cloud computing in a lot of cases is scaling.
Right now if your iPhone apps become really popular and overload your current host, what do you do?
Using the cloud, you can spin up new instances (servers) on demand, almost instantly. Another benefit is that you only pay for what you use. Of course, depending on the situation, taking full advantage of scaling features might require changes to your structure.
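The "spin up on demand" decision can be sketched as a toy scaling policy: compare average CPU against thresholds and adjust the fleet size. The thresholds and bounds below are illustrative, not AWS defaults:

```python
def desired_capacity(current, avg_cpu, scale_out_at=70.0, scale_in_at=25.0,
                     minimum=1, maximum=10):
    """Toy autoscaling policy: add one instance when average CPU is high,
    remove one when it is low, and keep the fleet within [minimum, maximum].

    `avg_cpu` is the fleet's average CPU percentage over the last period.
    """
    if avg_cpu > scale_out_at:
        current += 1
    elif avg_cpu < scale_in_at:
        current -= 1
    return max(minimum, min(current, maximum))
```

Run periodically against a metric feed, this is the basic shape of what an autoscaling service does for you: the popular-app overload scenario above becomes "CPU crossed 70%, add a server" instead of a manual emergency.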
edit: Specific to AWS, they have a service called Elastic Load Balancing. Take a look: http://aws.amazon.com/elasticloadbalancing/