I need to serve a live stream to more than 10K users. The Adobe website says that one EC2 instance of type m2.2xlarge can serve only about 10K users, so I have some questions:
Does CloudFront allow more users to connect than the 10K supported by the EC2 instance acting as a multiplexer of the original stream?
And depending on the answer to the above:
If CloudFront allows more users to connect, why would anyone need an m2.2xlarge EC2 instance when one with lower specs could do the same job and let CloudFront multiplex the live stream?
If CloudFront doesn't allow more users to connect than those 10K, what kind of architecture do I need? CloudFront + ELB + two or more EC2 instances with AMS installed and connected to another small EC2 instance with AMS installed which gets the stream from the live event?
CloudFront acts as a caching layer at each edge location. If the content is not available at an edge location, it connects to EC2, retrieves the data, and passes it on. So as far as I know, if you use CloudFront, you shouldn't need such a large EC2 instance.
I've tested this extensively with static resources; I haven't needed it for live streaming yet, but the same principles should apply.
This post on the AWS website from 2012 seems to confirm my hypothesis: http://aws.amazon.com/about-aws/whats-new/2012/03/29/amazon-cloudfront-improves-live-streaming-support-with-adobe-fms/
So basically, as long as the EC2 instance is strong enough to stream to all CloudFront edge locations simultaneously, you should be fine.
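As a rough illustration of that setup, here is a boto3 sketch of putting CloudFront in front of an EC2 origin. The origin domain name and IDs are placeholders, and this models plain HTTP delivery (RTMP live streaming used CloudFront's separate streaming distributions), so treat it as a sketch rather than a recipe:

    import boto3

    cloudfront = boto3.client("cloudfront")

    response = cloudfront.create_distribution(
        DistributionConfig={
            "CallerReference": "live-stream-origin-001",  # any unique string
            "Comment": "CloudFront in front of the AMS EC2 origin",
            "Enabled": True,
            "Origins": {
                "Quantity": 1,
                "Items": [{
                    "Id": "ams-origin",
                    # Hypothetical public DNS name of the EC2 instance
                    "DomainName": "ec2-203-0-113-10.compute-1.amazonaws.com",
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }],
            },
            "DefaultCacheBehavior": {
                "TargetOriginId": "ams-origin",
                "ViewerProtocolPolicy": "allow-all",
                "ForwardedValues": {
                    "QueryString": False,
                    "Cookies": {"Forward": "none"},
                },
                "TrustedSigners": {"Enabled": False, "Quantity": 0},
                "MinTTL": 0,
            },
        }
    )
    # The *.cloudfront.net domain that viewers should connect to
    print(response["Distribution"]["DomainName"])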
I need to host a service with a REST API on a server that performs the tasks listed below:
Download and upload files in an S3 bucket
Run some CPU-intensive computations
Return JSON responses
I know an EC2 instance would be a better approach for hosting my service, but given the price difference between a WorkSpace and an EC2 instance, I am exploring this route. Are there any limitations of Amazon WorkSpaces that might prevent me from using them for my use case?
I came across ngrok, which I believe can help me direct requests over the internet to the local server on my WorkSpace.
Has anyone played around with it and can offer any suggestions?
I'm afraid the AWS terms of service do not allow you to do that. See section 36 on WorkSpaces.
http://aws.amazon.com/service-terms/
36.3. You and End Users may only use the WorkSpaces Services for an End User’s personal or office productivity. WorkSpaces are not meant to accept inbound network connections, be used as server instances, or serve web traffic or your network traffic. You may not reconfigure the inbound network connections of your WorkSpaces. We may shut down WorkSpaces that are used in violation of this Section or other provisions of the Agreement.
I suggest an r5a.xlarge as the lowest-cost 32 GB RAM instance type (its AMD processor makes it cheaper than the Intel-based r5). Investigate whether Spot Instances would work if your state persists in S3 rather than on the local instance; otherwise, if you need it for at least a year, Reserved Instances are discounted relative to On-Demand pricing.
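For illustration, a minimal boto3 sketch of launching an r5a.xlarge as a one-time Spot Instance; the AMI ID and key pair name are placeholders, and this only makes sense if your state lives in S3 as suggested above:

    import boto3

    ec2 = boto3.client("ec2")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="r5a.xlarge",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",            # hypothetical key pair
        InstanceMarketOptions={
            "MarketType": "spot",
            "SpotOptions": {
                # Without MaxPrice, the cap defaults to the On-Demand price.
                "SpotInstanceType": "one-time",
                "InstanceInterruptionBehavior": "terminate",
            },
        },
    )
    print(response["Instances"][0]["InstanceId"])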
I have virtual hosts (sites) set up on two Linux EC2 instances which are behind an ELB. I would like to know the data transfer rate of one particular site hosted on these EC2 instances. There are almost 30 virtual hosts on each EC2 instance, and I need to calculate the average data transfer rate of all these sites. From CloudWatch I could only gather this information at the service level, not for a particular site. Is there any way to accomplish this?
In this case, I recommend two things:
Do not serve data transfer directly from ELB or EC2.
If you want to know the exact MB/GB/TB transferred:
Work with CloudFront in front of your ELB.
Register one CloudFront distribution for each site (domain, subdomain, or whatever).
If you do that, you will have more control and save money on data transfer, though it depends on the region (sometimes it can be a bit more expensive).
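With one distribution per site, the per-site transfer can then be read from CloudFront's CloudWatch metrics. A sketch, assuming boto3 and a hypothetical distribution ID (CloudFront publishes its metrics to CloudWatch in us-east-1 only):

    from datetime import datetime, timedelta

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/CloudFront",
        MetricName="BytesDownloaded",
        Dimensions=[
            {"Name": "DistributionId", "Value": "E1ABCDEFGHIJKL"},  # placeholder
            {"Name": "Region", "Value": "Global"},
        ],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Sum"],
    )

    # Sum the hourly datapoints to get that site's transfer for the day.
    total_bytes = sum(point["Sum"] for point in stats["Datapoints"])
    print("Bytes downloaded in the last 24h:", int(total_bytes))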
AWS seems a little daunting with too many overlapping services so I'm looking for some advice and direction.
We have a mobile app for which we've developed a sync server (i.e. users sign up and sync their data, which is kept on AWS). Currently we've set up an EC2 instance with a web server, Django endpoints, and a Postgres server. However, we need the following:
Ensure the service is available from different regions of the world for faster access
If that requires putting the postgres server outside of the EC2, what service do we need and how would replication work?
We will have larger file attachments stored on S3 separately, but need to do this securely and encrypt the files
Eventually we will host a web-app (i.e. an Angular 2 app) that would connect to the same database.
We also would need to do all this in the most economical way and then scale up as the load increases.
Any guidance would be appreciated; I'm struggling with the terminology at the moment. We also set up an Amazon SSL certificate, but that requires an Elastic Load Balancer and we only have one EC2 instance. What do we do to get this all working securely?
Based on the information provided, I would recommend starting with AWS Elastic Beanstalk, which will manage auto scaling and load balancing while providing you with a DNS URL for external domain mapping.
To ensure that the service is available from different regions for faster access, you can cache the static Angular app using CloudFront. You will then be able to add the SSL certificate to CloudFront instead of the ELB. If you plan to create multiple environments for different regions, you can use Route 53 for geo-based routing.
To move the Postgres server outside EC2, you can use AWS RDS, which supports synchronous replication with failover for Multi-AZ deployments. Postgres on RDS also supports cross-region replication if you plan to set up deployment environments in different regions, and you can create read replicas, which are asynchronously replicated, to improve read speeds.
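A minimal boto3 sketch of creating a read replica; both instance identifiers are hypothetical. For a cross-region replica you would run this in the target region and pass the source instance's ARN instead:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="myapp-postgres-replica",
        SourceDBInstanceIdentifier="myapp-postgres-primary",
    )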
You can encrypt the files in S3 with AES-256, using keys from KMS or supplied by your client. I would also recommend putting CloudFront in front of S3 and serving these files via signed URLs, so that clients can access them securely and directly while improving performance through distributed caching.
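As a sketch of both pieces, assuming boto3/botocore: the bucket, object key, KMS alias, CloudFront key pair ID, and distribution domain are all placeholders, and the signing callback uses the third-party rsa package (pip install rsa):

    from datetime import datetime, timedelta

    import boto3
    import rsa
    from botocore.signers import CloudFrontSigner

    s3 = boto3.client("s3")

    # Server-side encrypt the attachment with a KMS key on upload.
    with open("report.pdf", "rb") as f:
        s3.put_object(
            Bucket="my-app-attachments",
            Key="user-123/report.pdf",
            Body=f,
            ServerSideEncryption="aws:kms",
            SSEKMSKeyId="alias/my-app-key",  # hypothetical KMS key alias
        )

    def rsa_signer(message):
        # Private key matching the hypothetical CloudFront key pair.
        with open("cloudfront_private_key.pem", "rb") as key_file:
            private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
        return rsa.sign(message, private_key, "SHA-1")

    signer = CloudFrontSigner("APKAEXAMPLEKEYID", rsa_signer)

    # Short-lived signed URL the client uses to fetch the file via CloudFront.
    url = signer.generate_presigned_url(
        "https://d111111abcdef8.cloudfront.net/user-123/report.pdf",
        date_less_than=datetime.utcnow() + timedelta(minutes=5),
    )
    print(url)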
You can host the Angular app in S3 and cache it with CloudFront for faster access. Another option is to cache only the static asset path in CloudFront, so that subsequent requests for static assets are served from CloudFront.
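A small boto3 sketch of turning on static website hosting for the Angular bucket; the bucket name is a placeholder, and the error-document fallback to index.html is a common single-page-app pattern so client-side routes don't 404:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_website(
        Bucket="my-angular-app",
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            # SPA routing: fall back to index.html on unknown paths.
            "ErrorDocument": {"Key": "index.html"},
        },
    )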
FAQs from Amazon
Who should use AWS Elastic Beanstalk?
Those who want to deploy and manage their applications within minutes in the AWS Cloud. You don’t need experience with cloud computing to get started. AWS Elastic Beanstalk supports Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker web applications.
Your current environment isn't scalable (either in response to load or to another region). If you need scalability, it should be rearranged. It is difficult to give you details because the required environment depends on the application's architecture, but here are some suggestions:
DB: for better stability, a Multi-AZ RDS setup is recommended. The benefit is that RDS is a fully managed service, so you don't need to worry about replication, maintenance, etc.
Web/app servers: you can deploy a copy in any region you want and connect to the same DB.
S3: you can enable cross-region replication as well as encryption (a sketch follows after these suggestions), but make sure it is used wisely (e.g. files are served to the client from the bucket in the closest region)
You can set up your own SSL certificate on the server; it does not require an ELB. You can, however, use an ELB with only one web node.
I do NOT suggest using Beanstalk: although it really makes the first steps easier, you may have trouble trying to configure something non-standard in the future (unless you're very well familiar with EBT, of course).
For efficiency you may want to add a CDN (either AWS or another vendor).
Make sure your environment configuration is really secure. You may need someone on your team who is familiar with AWS, because every one of these topics could be a separate article.
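For the S3 suggestion above, a rough boto3 sketch of enabling cross-region replication; the bucket names and the IAM role ARN are placeholders, and replication requires versioning on both the source and destination buckets:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_versioning(
        Bucket="my-app-files-us",
        VersioningConfiguration={"Status": "Enabled"},
    )

    s3.put_bucket_replication(
        Bucket="my-app-files-us",
        ReplicationConfiguration={
            # Role that lets S3 replicate objects on your behalf.
            "Role": "arn:aws:iam::123456789012:role/s3-replication",
            "Rules": [{
                "Status": "Enabled",
                "Prefix": "",  # replicate everything
                "Destination": {"Bucket": "arn:aws:s3:::my-app-files-eu"},
            }],
        },
    )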
I have a system set up on AWS where I have a set of EC2 instances (as application servers from an Elastic Beanstalk) running in an auto-scaling, load-balanced environment. All this works fine.
I would like to load test this environment in order to obtain results that help me figure out what more needs to be done for the system to handle, potentially, millions of users. I have used a tool called Locust (http://locust.io) to do this so far. It allows me to send requests to my instance(s?) through a proxy as desired. However, I cannot tell whether the requests are being routed to multiple instances or constantly to the same one; and if they are being load balanced appropriately, I can't see how many requests each of the EC2 instances receives or its health under load. (I have a feeling that the requests are not being properly load balanced, as the failure rate always seems to increase drastically at a similar point in every test run.)
Is there a way to get this information from inside the AWS EC2 or Elastic Beanstalk consoles, or is there a better distributed web-based load testing tool that can provide the data I need?
There are two ways to get this information:
1) Create an S3 bucket and save the ELB access logs to it (a sketch for enabling this follows below). You can filter these logs to check which instance served each request.
2) Retrieve application-level logs: if Apache/nginx is installed on your EC2 instances to serve the requests, filter the Apache/nginx logs on every machine.
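For option 1, a minimal boto3 sketch of switching on access logs for a classic ELB; the load balancer and bucket names are placeholders, and the bucket needs a policy that allows the regional ELB account to write to it:

    import boto3

    elb = boto3.client("elb")  # classic ELB API

    elb.modify_load_balancer_attributes(
        LoadBalancerName="awseb-my-env-elb",
        LoadBalancerAttributes={
            "AccessLog": {
                "Enabled": True,
                "S3BucketName": "my-elb-logs",
                "EmitInterval": 5,  # minutes; each log entry names the backend instance
                "S3BucketPrefix": "loadtest",
            }
        },
    )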
Hope it helps!
There is a way to get this data from the AWS console.
Inside the Elastic Beanstalk console there is a tab titled Health. This tab (in the enhanced health overview) shows the number of requests per second, the responses to those requests, the latency, the load average, and the CPU utilisation for each EC2 instance run by the Elastic Beanstalk environment.
This data allows the system manager to see which of their back-end instances are receiving requests, and how many each is being sent, through the load balancer and proxy.
This can also be obtained from the EB CLI using:
eb health environment_name
As I understand it, I can use EC2 as a web server for my application. But how does the load balancer work?
For example, if I have one EC2 instance, then the load balancer will not work. Am I right?
And if I have a few EC2 instances, then I can configure the load balancer to balance between all of my EC2 servers.
Am I right?
Should application files on all instances be kept in sync? Is there any Amazon tool for syncing, or should I use something like rsync or post-commit hooks to sync files between EC2 instances?
Is it possible to use one EC2 instance for the web application (PHP + nginx) and a CDN (CloudFront)? Or what is the better way to achieve this: I need to store static files, but I have to access them from the web application (PHP scripts) through the file system. So I am going to use EC2 and CloudFront. But how can I get access?
Thanks for your time.
Technically, the load balancer will work; it's just that it'll only balance the traffic to one instance.
Correct. You register the instances with the Elastic Load Balancer, and while those instances are healthy, it will route requests to them.
There are many different ways to sync files; it all depends on what you want to sync. Cloud architecture is a little different to traditional architecture. For example, rather than loading the images onto the EBS volume, you'd try to offload them (and serve them) from S3. Therefore the only things you'd need to "sync" would be the webserver files themselves. You could use CloudFormation to roll out updates; post-commit hooks and rsync are also good options. The challenge is to remember that it can scale or fail almost at will, so you need to ensure that each instance knows how to get the information and keep itself updated in isolation.
Yes. It's called a custom origin. What you want to do, though, is add a URL rewrite on the outbound server that rewrites the local URLs to CloudFront domains.
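A toy Python sketch of what such a rewrite could look like; the distribution domain is a placeholder, and in a PHP + nginx stack you'd do the equivalent in your template layer or an output filter:

    import re

    CLOUDFRONT_DOMAIN = "d111111abcdef8.cloudfront.net"  # hypothetical

    def rewrite_static_urls(html, local_host="example.com"):
        """Point static-asset URLs at CloudFront instead of the origin."""
        pattern = r'(src|href)="https?://%s/(static/[^"]+)"' % re.escape(local_host)
        return re.sub(pattern, r'\1="https://%s/\2"' % CLOUDFRONT_DOMAIN, html)

    print(rewrite_static_urls('<img src="http://example.com/static/logo.png">'))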
Hope that helps