Best practice: Multiple Django applications on a single Amazon EC2 instance

I've been running a single Django application on Amazon EC2, using gunicorn to serve the Django portion and nginx for the static files.
I'm going to be starting a new project soon, and I'm wondering which of the following options would be better:
A larger Amazon EC2 instance (Medium) running multiple Django applications
Multiple smaller EC2 instances (Small/Micro), each running its own Django application?
Does anybody have experience with this? What would be the relevant performance metrics I could measure to get a good cost-to-performance ratio?

The answer to this question really depends on your app, I'm afraid. You need to benchmark to be sure you are running on the right instance type. Some key metrics to watch are:
CPU
Memory usage
Requests per second, per instance size
App startup time
You will also need to tweak nginx/gunicorn settings to make sure you are running with a configuration that is optimised for your instance size.
If costs are a factor for you, one interesting metric is "cost per ten thousand requests", i.e. how much are you paying per 10000 requests for each instance type?
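A minimal sketch of how you might compute that metric once you have benchmark numbers (the hourly prices and requests-per-second figures below are placeholders, not real benchmark results):

    # Hypothetical numbers for illustration only - substitute your own benchmark
    # results and the current EC2 price for each instance type you test.
    def cost_per_10k_requests(hourly_price_usd, sustained_requests_per_sec):
        """Dollar cost of serving 10,000 requests on one instance of this type."""
        requests_per_hour = sustained_requests_per_sec * 3600
        return hourly_price_usd / requests_per_hour * 10000

    # e.g. compare two sizes using made-up throughput figures:
    print(cost_per_10k_requests(0.013, 40))    # smaller instance, 40 req/s
    print(cost_per_10k_requests(0.052, 180))   # larger instance, 180 req/s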

I agree with Mike Ryan's answer. I would add that you also have to evaluate whether your app needs a separate database. Sometimes it makes sense to isolate large or complex applications with their own database, which makes changes and maintenance easier. (It also reduces your risk if something goes wrong, since not all of your user base would be affected by an outage.) You might want to create a separate instance for these applications. Note: Django supports multiple databases in one project, but, again, that increases the complexity of changes and maintenance.
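For reference, Django's multi-database support is just extra entries in the DATABASES setting (the names, hosts and credentials below are placeholders, and the router class is a hypothetical one you would write yourself if you want per-app routing):

    # settings.py - sketch of one project talking to two databases
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "main_app",
            "HOST": "db-main.example.internal",       # placeholder host
            "USER": "main",
            "PASSWORD": "change-me",
        },
        "reporting": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "reporting_app",
            "HOST": "db-reporting.example.internal",  # placeholder host
            "USER": "reporting",
            "PASSWORD": "change-me",
        },
    }

    # Optional: send selected apps to the second database automatically.
    DATABASE_ROUTERS = ["myproject.routers.ReportingRouter"]  # hypothetical router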

Related

Recommended way to run a web server on demand, with auto-shutdown (on AWS)

I am trying to find the best way to architect a low cost solution to provide an on-demand web server for a certain amount of time.
The context is as follows: I have a large amount of data sitting on S3. From time to time, users will want to consult that data. I've written a Flask app that can display the data in a nice way for them. Being poorly written, it really only accepts a single user session at a time. Currently, therefore, they have to download the Flask app and run it on their own machine.
I would like to find a way for users to request a cloud-based web server that would run the Flask app (through a docker container for example) on-demand, and give them access to it quickly, without having to do much if anything on their own machine.
Every user wanting to view the data would have their own web server created on demand (to avoid multiple users sharing the same web server, which wouldn't work with my Flask app).
Critically, and in order to avoid cost, the web server would terminate itself automatically after some (configurable) idle time (possibly with the Flask app informing the user that it's about to shut down, so that they can "renew" the lease).
Initially I thought that maybe AWS Fargate would be good: it can run docker instances, is quite configurable in terms of CPU/disk it can get (my Flask app is resource-hungry), and at least on paper could be used in a way that there is zero cost when users are not consulting the data (bar S3 costs naturally). But it's when it comes to the detail that I'm not sure...
How to ensure that every new user gets their own Fargate instance?
How to shut down the instance automatically after idle time?
Is Fargate quick enough in terms of boot time?
The closest thing I can think of is AWS App Runner. It's built on top of Fargate and it provides an intelligent scale-out mechanism (which you are probably not interested in) as well as a scale-to-(almost)-zero capability. The way it works is that when the endpoint is being used and doing work, you pay for the entire Fargate task (CPU/memory) you have selected in the configuration. If the endpoint is doing nothing, you only pay for the memory (note the memory cost is roughly 20% of the entire cost, so it's not scale-to-zero but "quasi"). Check out the pricing examples at the bottom of this page.
Please note you can further optimize costs by pausing/resuming the service (when it's paused you pay nothing), but in that case you need to build the logic that pauses and resumes it yourself.
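If you go that route, the pause/resume calls themselves are simple; the harder part is deciding when the service is idle. A rough sketch with boto3, where the service ARN is a placeholder and the idle-detection trigger is left entirely to you:

    import boto3

    apprunner = boto3.client("apprunner")
    SERVICE_ARN = "arn:aws:apprunner:eu-west-1:123456789012:service/my-flask-viewer/abc"  # placeholder

    def pause_when_idle():
        # Call this from whatever mechanism detects "no activity for N minutes".
        apprunner.pause_service(ServiceArn=SERVICE_ARN)

    def resume_for_user():
        # Call this when a user asks for the viewer; resuming takes some time,
        # so in practice you would poll describe_service until it is RUNNING.
        apprunner.resume_service(ServiceArn=SERVICE_ARN)
        return apprunner.describe_service(ServiceArn=SERVICE_ARN)["Service"]["Status"]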
Another option you may want to explore is using Lambda this way (which would allow you to use the same container image and benefit from Lambda's intrinsic scale to zero). But given your comment that "Lambda doesn't have enough power, and the timeout is not flexible", that may be a show-stopper.

What is an instance and how do I convert this to $

You'll have to excuse my ignorance on this one...but honestly, I've had a hard time finding clarity on this. That being said, I'm looking for a non-technical answer...something in layman's terms!
Anyways, I've been playing around building a web app (first time obviously) and I'm getting to the point where I've started looking into hosting services. A quick Google search and a few blogs later, I thought AWS would be a good place to start, since they give a free one-year trial. I don't care about speedy upstarts or other hosting services, so save your keystrokes on offering other services.
My question is based on the fact that AWS charges "Linux Usage per hour" and they also use this term "instance". Yeah...an "instance" is an "object", which is also above my head (probably the real source of the problem), but that was the extent of what I was able to learn via a Google search. That being said, I don't know how to translate this into a ballpark cost. Yes, I can probably use the trial to help monitor predictable costs, but I don't want to go through the effort of learning one hosting company's system just to find out it's not going to work in the end.
OK...so hopefully by now you see where I'm coming from. What is an "instance" and how do I use the "Linux Usage per hour" to estimate cost? Is an instance a server? For example, if I start NGINX is that an instance? Is it just one instance running NGINX, or does every VPN represent an instance? If I have 100 people calling the server at once, can they fit on one instance? If I start another server, say Apache or Node, does that become another instance? If I connect to a database, is that an instance? Do instances start as needed? Yes, I know, that's more than one question...I'm just trying to express my confusion.
If I'm supposed to choose a pricing model from this list, "Linux Usage per hour", I need to know what they mean by "Linux Usage". If it's based on an "instance", I need to know what that is. So please, in layman's terms, help clear this up. Maybe some examples or analogies, but no deep technical stuff.
This is more a side note, but I was reading this article and it said
For a client needing to run 800 virtual instances, the annual cost of
a private cloud came to below $400,000 vs. somewhere between $800,000
and $1.2 million for public cloud services.
Considering I don't know what an instance is, that kinda made me a bit nervous...WAAAAAAyyyyyy outta my price range! Yes, it's obviously a big company, but can you imagine "hitting the lottery" with an app everyone loves and then, before you know it, AWS hits you with a bill of $1,000,000? Or even worse, your security sucks and someone spawns millions of these "instances"...help allay my paranoia!!
Basically, an instance is a virtual machine, which looks very much like a server. As such, it's running an operating system - e.g. Linux - which is capable of running many programs (aka 'processes' or, sometimes, 'services') at the same time.
To go through your questions (some of the explanations below are not technically accurate, but are hopefully more explanatory for it - if anything is obvious or already known, apologies - I'm trying not to assume any knowledge):
An instance is an object
This definition is coming up in your searches because 'instance' has many definitions in different situations. If you see the definition of 'instance' as an object, it's from the topic of object oriented programming languages - you define a class in your code (kind of like a 'template'), and then create instances of the class - kind of like real copies of the template.
Amazon borrowed the term to be analogous - because in the 'cloud' world, you can create an AMI (Amazon Machine Image - the template) and then create lots of instances that are copies or clones of that template.
Is an instance a server?
In terms of what you can do with it, yes, it's a server.
(Technically it's a virtual server - Amazon runs multiple virtual servers on each physical server.)
how do I use the "Linux Usage per hour" to estimate cost?
Estimate how long you will have your instance running for in hours per month, multiply that by the cost per hour, and you will have your estimated cost per instance per month.
e.g. - one instance always turned on would be - 24 hrs * 31 days = 744 hours. At $0.013/hr (for a t2.micro) that would be 744 * $0.013 = $9.672/mth.
(And that's the reason the free tier gives you 750 hours of instance time per month.)
Instances come in different types and sizes and each size costs a different amount. If you are not sure what size you need, I'd start with the smallest until you discover you need more - which would be when your program starts running too slowly.
For example if I start NGINX is that in instance?
Nginx is a program that runs as a daemon in Linux terms - a program that runs in the background so it's always on. It will be one of the many programs running on the server (aka the instance).
If I have 100 people calling the server at once, can they fit on one instance?
It depends - on how big your instance is, and how efficient the program is that is responding to their requests. If you are just getting started learning to program websites, I wouldn't worry about handling 100 people issuing requests to the server all at once just yet - walk before you run :) (Also, even when there are 100 people visiting your website, the odds that all of them issue a request at exactly the same time are low - usually they load a page and read it, and while they're reading it, some of the other people are loading other pages, and it all spreads out, so you might only have ~10 page requests actively being processed by your server at the same time.)
However, if you have 2,000 people on your site at the same time, you might be processing 200 page requests at once, so by then you do need to have put some thought into performance and scalability.
(Note: these numbers are arbitrary and depend entirely on the type of site and its traffic patterns.)
Generally, most websites pick a mid-level instance size, and then to handle more requests they 'scale out' - create lots of copies of that instance, and allow each instance to handle a portion of the traffic.
If I start another server say, Apache or Node, does that become another instance
The language to use here would be 'start another service, say Apache or Node' - they are other programs, and your instance will be perfectly fine running nginx, Apache and Node all at the same time. However, each will consume some of the resources (e.g. memory and CPU), and the more activity they are doing, the sooner you will run out of resources and need to get a bigger instance size.
So - no, they don't automatically become another instance. The language is confusing because sometimes people don't distinguish between the 'server' (aka the instance) and the service (aka the program) and will say the 'apache server' and the 'apache service' interchangeably.
If I connect to a database, is that an instance?
Your instance, as a fully capable server, could run a database service on it at the same time as the other services - e.g. you could install and run mysql on your instance.
There is another option, though - if you use the AWS RDS product, then you will be starting an RDS instance. An RDS instance is different from an EC2 instance (what we've been talking about so far) in that RDS instances are specialised to just run the database service and nothing else, but EC2 instances are general servers that you can do pretty much anything on.
It's usually recommended to use RDS, but if you are trying to save money and aren't serving many users, there's nothing particularly wrong with installing mysql on your instance yourself (especially while you're learning how it works) and then moving your data to an RDS instance when you want to support more load or traffic.
Do instances start as needed?
Not by default, no - you have to manually start and stop them.
However, there are options other than manually starting and stopping. Amazon provides a lot of APIs, so you could write a program that connects to the API and automatically starts and stops your instance(s) based on rules you build into your program.
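As an illustration, such a program can be only a few lines with the boto3 Python library (the instance ID below is a placeholder, and the rules for when to call each function are whatever you decide):

    import boto3

    ec2 = boto3.client("ec2")
    INSTANCE_ID = "i-0123456789abcdef0"  # placeholder - use your own instance ID

    def start_instance():
        # e.g. call this at the start of the working day, or when a user shows up
        ec2.start_instances(InstanceIds=[INSTANCE_ID])

    def stop_instance():
        # e.g. call this overnight, or after some period of inactivity
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])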
Also, Amazon offers a product called "AutoScalingGroups" which allows you to have a related group of instances and for Amazon to automatically start and stop them according to rules that you configure into that product. These rules can be 'scheduled actions' - start/stop at certain times of day - or they can be reactive - e.g. when the average CPU usage is > 50% for more than 5 minutes, start another instance.
This is more a side note, but I was reading this article and it said
For a client needing to run 800 virtual instances, the annual cost of
a private cloud came to below $400,000 vs. somewhere between $800,000
and $1.2 million for public cloud services.
The 'free tier' gives you a t2.micro sized instance (1 vCPU, 1 GiB RAM) which you could leave turned on permanently for free during that free year.
Even after your free tier expires, that same instance would cost you $9.67/mth, and you have the option to downgrade to a t2.nano (0.5 GiB RAM), which would only cost ~$4/mth - but 0.5 GiB RAM isn't much these days, so it may not be enough for you.
A t2.micro should be more than enough to learn how to build websites on. If you are fortunate enough to build a site that is popular enough that you are getting more requests than that server can handle, then you will have to decide if you can generate revenue from that popularity sufficient to cover the cost, but by then you'll have more of a sense of how efficient your program is, and what instance size (and/or how many instances) you'll need.
Yes, it's obviously a big company, but can you imagine "hitting the
lottery" with an app everyone loves then before you know it, AWS hits
you with a bill of $1,000,000
AWS protects you from yourself a bit here - they have limits which generally restrict you from running more than 20 instances at a time - unless you ask for permission. So, by default, your instances won't go multiplying like rabbits on their own - unless you set them up to. And even if you have set that up, they won't be able to grow beyond 20 instances unless you have asked Amazon to let you. So, the worst case is 20 x $9.67/mth ≈ $193/mth.
But - that's just the instance cost. Amazon charges you for lots of things, including data traffic in and out, RDS instance costs, and if you start using other services such as S3 buckets and/or elastic load balancers, they all attract their own costs.
But hopefully, if you hit the lottery with an app everyone loves, you've worked out how to convert that love into dollars and cents so you can pay for all those instances you're going to need :)

I need feedback on this partly serverless architecture design

I want to host a scalable blog or application of this sort in nodeJS on AWS making use of AWS technologies. The idea here is to have a small EC2 server that is not responsible for serving the website, but only for running the CMS/admin panel. While these operations could be serverless as well, I think having a dedicated small VM EC2 instance could be more efficient, and works better with existing frameworks, etc.
In my diagram above, you can see there are two types of users: audiences and admin/writers. Admin CRUD operations also cause Lambda to run. Lambda generates the static site after admin changes, which is delivered to S3. Users are directed to the static site hosted in S3. Only admins/writers have access to the server-connecting part of the site.
I think this is a good design for an extremely scalable and relatively cheap site, as long as the user-facing side is all static. An alternative to this is a CDN, but then I have to deal with cache invalidation issues, a site that updates slower, and a larger server.
This seems like a win-win to me. Feedback?
This ought to be a comment rather than an answer, but as I don't have enough points...
There are a couple of other considerations for this architecture. Lambda functions are great for scaling out microservices horizontally, with each small function being executed in parallel tens or hundreds of times. Generation of a static site is typically a single-threaded operation, so you may not see the gains you expect; you'll also need to watch the timeout period (maximum 300 seconds currently) and make sure that you can generate the site in that time. Of course, if you are not running Lambda code, you are not getting charged.
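To make that concrete, the kind of handler being described might look roughly like this (sketched in Python for brevity; the bucket name, render step and event shape are all placeholders, and the render step is the single-threaded part that has to finish inside the Lambda timeout):

    import boto3

    s3 = boto3.client("s3")
    SITE_BUCKET = "my-static-blog-bucket"  # placeholder

    def render_site(change_event):
        # Placeholder for the static site generation step triggered by an admin
        # change - this is the part that must complete within the Lambda timeout.
        return {"index.html": "<html>...</html>"}

    def handler(event, context):
        # Write every regenerated page to the S3 bucket that serves the site.
        for path, html in render_site(event).items():
            s3.put_object(
                Bucket=SITE_BUCKET,
                Key=path,
                Body=html.encode("utf-8"),
                ContentType="text/html",
            )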
For your admin frontend I would suggest Elastic Beanstalk; even if you peg it at a single instance, it gives you lots of great features like rolling updates.
Good luck with the project.

Separate server for Memcache/Redis?

I am using Django for my project and I'll be hosting it on Linode or another hosting service. If I want to use Memcached, will I require a new Linode for it? That is, will just one server be OK, or will I have to host my site on two servers, one for Memcached and one for Django? And is it the same for Redis? Also, will I require a separate server for MySQL?
I don't think you understand that nobody is a fortune-telling wizard. Nobody knows how many requests you will receive per second, nor how CPU/memory intensive each request will be. Nobody knows how optimized your code is. Nobody knows if your application is read-heavy or write-heavy. Your use case is your own, and you're probably the only one who can estimate it.
My only actual advice to you is to try to estimate your server data and server load and benchmark your setup on one machine. If you are unsatisfied with the performance, then scale up. You can either scale up vertically, by increasing the size of your Linode, or scale horizontally by adding more Linode instances. In the latter case, you will most likely put your DB on a machine of its own and have multiple Django instances fed by a load balancer. These Django instances could each share the same Memcached on one machine, or they could each have their own Memcached on their own machines. Which one is better? I can't tell you. It again depends on your use case.
If I were you, I would set it all up on one linode instance. I would create test data that I assume would be close to real world. Then I would try to test my response times with an estimated number of requests per second. I would measure response times, cache hits, and memory usage. I would then decide based on that if my use case is satisfied with this level of performance or not because I'm really the only one who would know what is satisfactory performance. Additionally, adding more linode resources is not necessarily where I would first try and improve performance.
Some great tips on optimizing and benchmarking can be found here:
https://docs.djangoproject.com/en/1.8/topics/performance/
http://blog.disqus.com/post/62187806135/scaling-django-to-8-billion-page-views
http://scottbarnham.com/blog/2008/04/28/django-performance-testing-a-real-world-example/
Late night reading about scaling up Django can be found in many books, I like this one:
https://highperformancedjango.com/
Sorry if I sound a bit blunt, I just want you to understand that nobody can walk in here and give you an answer with a large degree of confidence. This question doesn't have a straight-forward answer.
TL;DR Start with one instance and scale up only if you've convinced yourself you need to.
You say Memcached or Redis, so I assume Redis would be deployed without persistence, with a purely in-memory configuration.
In that case, both Memcached and Redis are unlikely to get saturated even if you run them on one server, since the limiting factor is more likely to be the single Django instance as your requests per second go up.
However, you should make sure to have enough memory and to configure an appropriate maximum memory usage for Memcached / Redis (this is accomplished in different ways in the two services). Note that otherwise, under memory pressure, the Linux OOM killer may kill your cache, so if you go for a single instance - which seems to me a sensible first step - make sure that your Django memory usage plus the memory you allocate for caching does not come close to the limit of the instance's free memory.
CPU is hardly going to be an issue, as I said, since Memcached / Redis are pretty good at using little CPU, so I can't foresee a setup where Django is OK serving pages but the instance is in trouble because the CPU is being burned by the cache.
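To make the single-instance setup concrete, here is a minimal sketch of the Django side plus the memory cap mentioned above (backend choice and sizes are placeholders; use whichever of Memcached or Redis you actually pick):

    # settings.py - Django 4+ ships a Redis cache backend; for Memcached you
    # would use django.core.cache.backends.memcached.PyMemcacheCache instead.
    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": "redis://127.0.0.1:6379/1",
        }
    }

    # redis.conf - cap the cache and evict least-recently-used keys when full:
    #   maxmemory 256mb
    #   maxmemory-policy allkeys-lru
    # For Memcached, the cap is a startup flag instead:  memcached -m 256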

AWS autoscaling an existing instance

This question has a conceptual and practical parts.
Conceptually I'd like to know if using the autoscaling functionality is equivalent to simply increasing the compute power by a factor of the number of added instances?
Practically ... how does this work? I have one running instance, its database sitting on an LVM composed of multiple EBS volumes, similarly with all website data. Judging from the load on the instance I either need to upgrade to a more powerful instance or introduce this autoscaling. Is it a copy of the running server? If so, how is the database (etc) kept consistent?
I've read through the AWS documentation, and still haven't got the picture yet - I could set one autoscaling group up which would probably clear my doubts, but I am very leery to do this with a production server.
Any nudges in the right direction would be welcome.
Normally, if you have a solution that uses a database and several machines, the database is typically not on any of those machines but is instead hosted separately, with each worker machine pointing to the same database - if you are on the AWS platform already, then DynamoDB or RDS are both good solutions for this.
In theory, for some applications, upgrading the size of the single machine will give you the same power as adding several smaller machines, but increasing the size of the single machine, while usually the easiest thing to do at first, should not be considered autoscaling and has its own drawbacks. Here are some things to consider:
Using multiple machines instead of one big one gives you some fault tolerance. One or more machines can go down and if your solution is properly designed new machines will spin up to replace them.
Increasing the size of a single-machine solution means you are probably paying too much. If you size that single machine big enough to handle peak workloads, that means at other times (maybe most of the time) you are paying for a bigger machine than you need. If you set up your autoscaling solution properly, more machines come online in response to increasing demand, and they terminate when that demand decreases - you only pay for the power you need, when you need it.
When your solution is designed in this manner, you need to think of all of the worker machines as ephemeral - likely to disappear at any time - so you need to build your solution differently. Besides using a hosted database (like DynamoDB or AWS RDS), you also should not store any data on the machines in your auto-scaling group that doesn't also live somewhere else. For example, if part of your app allows users to upload images, you don't store them on the instances; you store them in S3. The same applies to any other new data that comes in.
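As a small illustration of that point, an upload handler would hand the file straight to S3 rather than writing it to the instance's disk (the bucket name and web-framework wiring below are placeholders):

    import uuid
    import boto3

    s3 = boto3.client("s3")
    UPLOAD_BUCKET = "my-app-user-uploads"  # placeholder

    def save_upload(file_obj, filename):
        # Stream the upload to S3 and return the key; store only the key in your
        # hosted database so any instance in the auto-scaling group can serve it.
        key = f"uploads/{uuid.uuid4()}-{filename}"
        s3.upload_fileobj(file_obj, UPLOAD_BUCKET, key)
        return key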
You need to be able to figuratively 'pull the plug' at any instant on any of the machines in your ASG without losing data.
Ultimately a properly setup auto-scaling solution will likely serve you better, but without doubt it is simpler to just 'buy a bigger machine' and the extra money you spend on running that bigger machine may be more than offset by the time and effort you don't have to spend re-architecting your solution to properly run in an autoscaling environment. The unique requirements of your solution will ultimately decide which approach is better.