I'm running my Python-based REST services on a free Django hosting site called "PythonAnywhere".
So far each web service responded in ~100 ms, so my frontend was very fast, but in the last few days response times have changed drastically: the same REST API now takes 30 seconds.
With that performance I cannot schedule my demo, so I am planning to set up a new Django/Python/MySQL environment for myself.
What are the best ways to host/set up a Django-based application (with MySQL)? Free hosting is preferred, but I don't mind spending a few bucks for a better service.
For production deployments, the recommended setup is to serve Django through WSGI.
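For example, Django generates a wsgi.py like the sketch below ("myproject" is a placeholder name), which any WSGI server such as Gunicorn can serve:

    # myproject/wsgi.py - the module Django generates for WSGI deployment.
    # "myproject" is a placeholder for your actual project name.
    import os

    from django.core.wsgi import get_wsgi_application

    # Point Django at the settings module before building the application.
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    application = get_wsgi_application()

Then something like gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 runs it behind your web server.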
StackOverflow is not the right place to solicit blanket recommendations, especially since you haven't given any idea of your expected load/usage.
If you just need something to run your application "online" quickly, a PaaS provider should offer the quickest ramp-up time. I have used Heroku before and it's very simple to get started.
Apologies in advance for my limited knowledge of AWS.
I'm trying to draw parallels between my current setup on Heroku and a move to AWS. I've run into some memory issues on Heroku because of the machine learning models I'm running, and Heroku seems too expensive for my needs.
I was recommended to move to AWS using Fargate, which would be a better fit for my app. Below is my whole architecture; I'm hoping for some guidance on my direction: what I have and where I plan to go.
A Django application running on Heroku.
The core functionality: a user records a video on their mobile device and uploads it to S3. An SNS message is sent to my Heroku server when the upload is complete. The server kicks off a Celery task that downloads the video from S3 and uses a machine learning model to do some natural language processing, then saves the results to my PostgreSQL database. Obviously this is very compute-intensive, so I've run into memory issues and can foresee scaling issues to come.
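Roughly, the Celery side of this looks like the sketch below; run_nlp_model and VideoResult are stand-in names, not my actual code:

    # tasks.py - a rough sketch of the pipeline described above.
    # run_nlp_model and VideoResult are hypothetical placeholders.
    import tempfile

    import boto3
    from celery import shared_task

    from myapp.ml import run_nlp_model      # hypothetical ML helper
    from myapp.models import VideoResult    # hypothetical Django model


    @shared_task
    def process_video(bucket: str, key: str) -> None:
        """Triggered after SNS reports that an upload to S3 completed."""
        s3 = boto3.client("s3")
        with tempfile.NamedTemporaryFile(suffix=".mp4") as tmp:
            # Pull the uploaded video down from S3 to a temp file.
            s3.download_fileobj(bucket, key, tmp)
            tmp.flush()
            # Memory-hungry step: this is what outgrows a Heroku dyno.
            result = run_nlp_model(tmp.name)
        # Persist the NLP output to PostgreSQL via the ORM.
        VideoResult.objects.create(s3_key=key, data=result)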
After lots of tweaking and attempts to no avail, I've decided to move over to AWS and leverage the cost benefits I've seen there, compared to Heroku, for running more memory-intensive tasks.
I should also mention that there is a web interface involved with this Django project; it isn't just a REST API.
As far as AWS goes, I'm looking for a bit of direction. Possibly just a rough outline of the architecture I should look deeper into.
My first plan is to dockerize my application and go from there... but I'm a bit stuck on how my application (website, REST API, worker threads) fits into the AWS ecosystem.
AWS is a great fit for the application you describe. AWS Fargate / RDS will host your Django application. You have the option of using AWS Batch to handle your processing. One huge advantage is the ability to scale according to the needs of your application.
This image is one possible way to structure your application. It's a lot of work to get to this point, but AWS offers a lot of power and flexibility for reasonable costs IMO.
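To sketch the Batch hand-off mentioned above (the queue and job definition names are hypothetical and would need to exist in your account first):

    # Sketch: the Django app on Fargate hands heavy NLP work to AWS Batch
    # instead of running it in-process. The queue and job definition
    # names are placeholders; create them in your AWS account first.
    import boto3

    batch = boto3.client("batch", region_name="us-east-1")


    def submit_nlp_job(bucket: str, key: str) -> str:
        """Queue one video-processing job on AWS Batch, return its id."""
        response = batch.submit_job(
            jobName="video-nlp",                 # names need not be unique
            jobQueue="video-nlp-queue",          # hypothetical queue
            jobDefinition="video-nlp-job:1",     # hypothetical definition
            containerOverrides={
                "environment": [
                    {"name": "S3_BUCKET", "value": bucket},
                    {"name": "S3_KEY", "value": key},
                ]
            },
        )
        return response["jobId"]

The worker container then reads S3_BUCKET/S3_KEY from its environment, processes the video, and Batch scales the compute independently of your web tier.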
I have a Django server deployed on Google App Engine. A simple GET request takes around 2 seconds, while the same request takes around 300 ms when run locally. Both servers use the same MySQL database on Google Cloud SQL. I am testing this on my home wifi (100 Mbps), so I don't think it's a network issue; in any case, the payload is pretty small (2.5 kB).
Has anyone seen this slowness when deployed to Google App Engine? Is there any configuration change I could make to App Engine that would make it faster?
Any suggestions are welcome.
Thanks!
When comparing Google App Engine's performance with local performance, keep in mind that a deployment on GAE needs more time to import all the necessary libraries and set up the Django framework.
Here, it is stated that instance startup time is up to seconds for the Standard Environment and up to minutes for the Flexible Environment. Additionally, I found some StackOverflow posts that shed some light on this, here and here.
You can profile your application with Cloud Trace to analyze requests and isolate what causes the issue, so you can improve it afterwards.
In addition, there are various ways to optimize your application's performance; typical ones include:
Scaling configuration: set min_idle_instances so that idle instances are kept running and ready to serve traffic.
Warm-up requests: reduce request and response latency while your app's code is being loaded onto a newly created instance (a sketch of a warm-up handler follows below).
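If it helps, here is a minimal sketch of a warm-up handler in Django; App Engine sends GET /_ah/warmup to new instances once warm-up requests are enabled in app.yaml (inbound_services: warmup):

    # urls.py - sketch of handling App Engine's warm-up request in Django.
    from django.http import HttpResponse
    from django.urls import path


    def warmup(request):
        # Do expensive one-time work here (open DB connections, prime
        # caches) so the first real request doesn't pay for it.
        return HttpResponse("warmed up", status=200)


    urlpatterns = [
        path("_ah/warmup", warmup),
        # ... your other routes ...
    ]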
Furthermore, here and here you can find the official tutorials on running Django in each App Engine environment, so you can spot any details you may have missed.
Finally, during my investigation I came across PageSpeed Insights, which analyzes the content of a web page and generates suggestions to make that page faster; it could be handy.
I hope this information is helpful and points you in the right direction.
I have worked mostly with monolithic web applications over the years and have not spent very much time looking at Django high-availability scaling solutions.
There are plenty of NGINX/Postgres/Django/Docker builds on hub.docker.com, but I could not find one that would be a good starting point for a scalable Django web application.
Ideally the Docker project would include:
A web container configuration that easily allows adding new web containers to the web app
A database container configuration that easily allows adding new database shards
Django microservice examples would also be a plus
Unfortunately, I do not have very much experience with Kubernetes or Docker Swarm.
If there are none, I may make this my next github.com project.
Thank you in advance.
That is a very broad request. It depends on many factors:
What is the performance requirement?
Make sure you really need that kind of scaling. In most cases, a single good server with docker-compose can handle your needs. If you want to reduce the risk of downtime, create two servers, add a load balancer between them, and extract shared services like the DB or caching into separate services that both server instances use.
How do you want to run your applications?
Docker-Compose:
Check out https://github.com/pydanny/cookiecutter-django for how to do it with docker-compose on a single server.
Kubernetes:
Django runs quite well in Kubernetes; I am running several applications there. The catch is that you need to set up all the other services properly (Redis, DB, Elasticsearch, etc.). Each of those services runs independently of your main Django app and needs to be connected through libraries like Haystack or python-redis. A settings sketch follows below.
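As a minimal sketch of that wiring, with service endpoints coming from the environment (which is how Kubernetes typically injects them); hostnames and defaults are placeholders, and the Redis cache backend shown is the one built into Django 4+:

    # settings.py (sketch) - external services are reached via env vars,
    # e.g. from Kubernetes ConfigMaps/Secrets. Values are placeholders.
    import os

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "HOST": os.environ.get("DB_HOST", "postgres"),
            "PORT": os.environ.get("DB_PORT", "5432"),
            "NAME": os.environ.get("DB_NAME", "app"),
            "USER": os.environ.get("DB_USER", "app"),
            "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        }
    }

    CACHES = {
        "default": {
            # Built-in Redis cache backend (Django 4.0+).
            "BACKEND": "django.core.cache.backends.redis.RedisCache",
            "LOCATION": os.environ.get("REDIS_URL", "redis://redis:6379/1"),
        }
    }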
Heroku & others
There are also providers like Heroku that offer auto-scaling for a price. If price is less of a concern, go with these solutions, because they are very easy to set up and maintain.
How scalable should your DB be?
If you try to set up your own DB cluster across different servers/regions, you will spend a lot of time building and maintaining it. I would recommend using a managed DB service like AWS RDS instead: they do this for you with easy setup and maintenance, and you can configure how scalable it should be. It costs some money, but it's the least-effort solution.
I have a web app running on PHP, MySQL, and Apache on a virtual Windows server. I want to redesign it so it is scalable (for fun, so I can learn new things) on AWS.
I can see how to set up an EC2 instance and dump it all in there, but I want to make it scalable and take advantage of all the cool features of AWS.
I've tried googling but just can't find a simple guide (note: I have no Linux command-line experience).
Can anyone direct me to detailed resources that can lead me through the steps and teach me? Or alternatively, summarise the steps in an answer so I can research based on what you say.
Thanks
AWS is growing and changing all the time, so there aren't a lot of books to help. Amazon offers training that's excellent. I took their three day class on Architecting with AWS that seems to be just what you're looking for.
Of course, not everyone can afford to spend the travel time and money to attend a class. The AWS re:Invent conference in November 2012 had a lot of sessions related to what you want, and most (maybe all) of the sessions have videos available online for free. Building Web Scale Applications With AWS is probably relevant (slides and video available), as is Dissecting an Internet-Scale Application (slides and video available).
A great way to understand these options better is by experimenting with your existing application on AWS. It is easy to just move it to an EC2 instance, then start taking more advantage of what's available. The first thing I'd do is get rid of the MySQL server on your own machine and use one offered with RDS. Once that's stable, create one or more read replicas in RDS, and change your application to read from them for most operations, reading from the main (writable) database only when you need completely current results.
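The app in question is PHP, but in Django terms (the stack most of this thread is about) the read/write split can look like this sketch; the RDS endpoints and credentials are placeholders:

    # settings.py - sketch of splitting reads and writes between an RDS
    # primary and a read replica. Endpoints/credentials are placeholders,
    # and ReplicaRouter would normally live in its own module.
    DATABASES = {
        "default": {  # primary (writable) RDS instance
            "ENGINE": "django.db.backends.mysql",
            "HOST": "myapp.abc123.us-east-1.rds.amazonaws.com",
            "NAME": "myapp",
            "USER": "myapp",
            "PASSWORD": "secret",
        },
        "replica": {  # RDS read replica endpoint
            "ENGINE": "django.db.backends.mysql",
            "HOST": "myapp-replica.abc123.us-east-1.rds.amazonaws.com",
            "NAME": "myapp",
            "USER": "myapp",
            "PASSWORD": "secret",
        },
    }

    DATABASE_ROUTERS = ["myapp.routers.ReplicaRouter"]


    class ReplicaRouter:
        """Send reads to the replica, writes to the primary."""

        def db_for_read(self, model, **hints):
            return "replica"

        def db_for_write(self, model, **hints):
            return "default"

        def allow_relation(self, obj1, obj2, **hints):
            return True  # both databases hold the same data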
Does your application keep any data on the web server, other than in the database? If so, get rid of all local storage by moving that data off the EC2 instance. Some of it might go to the database, some (like big files) might be suitable for S3. DynamoDB is a good place for things like session data.
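For a Django app, moving uploads off the instance can be as small as this sketch, assuming the third-party django-storages package (pip install django-storages boto3); the bucket name is a placeholder:

    # settings.py - sketch: store uploaded media on S3 instead of the web
    # server's local disk, via the third-party django-storages package.
    INSTALLED_APPS = [
        # ... your apps ...
        "storages",
    ]

    DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
    AWS_STORAGE_BUCKET_NAME = "myapp-media"  # placeholder bucket name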
All of the above reduces the load on the web server to just your application code, which helps with scalability. And now that you keep no state on the web server, you can use ELB and Auto-scaling to automatically run multiple web servers (and even automatically launch more as needed) to handle greater load.
Does the application have any long running, intensive operations that you now perform on demand from a web request? Consider not performing the operation when asked, but instead queueing the request using SQS, and just telling the user you'll get to it. Now have long running processes (or cron jobs or scheduled tasks) check the queue regularly, run the requested operation, and email the result (using SES) back to the user. To really scale up, you can move those jobs off your web server to dedicated machines, and again use auto-scaling if needed.
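A minimal sketch of that queue pattern with boto3 (the current Python AWS SDK); the queue name and the two helper functions are placeholders:

    # Sketch of the SQS pattern described above. The queue must already
    # exist; run_long_job and send_result_email are stand-in helpers.
    import json

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    queue_url = sqs.get_queue_url(QueueName="long-jobs")["QueueUrl"]


    def run_long_job(params):
        """Placeholder for the real long-running operation."""
        return f"done: {params}"


    def send_result_email(address, result):
        """Placeholder; in practice this would go through SES."""
        print(f"emailing {address}: {result}")


    def enqueue_job(user_email, params):
        """Web-request side: queue the work and return immediately."""
        sqs.send_message(
            QueueUrl=queue_url,
            MessageBody=json.dumps({"email": user_email, "params": params}),
        )


    def drain_queue():
        """Worker side: poll the queue, run jobs, delete handled messages."""
        resp = sqs.receive_message(
            QueueUrl=queue_url,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling
        )
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            result = run_long_job(job["params"])
            send_result_email(job["email"], result)
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )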
Do you need bigger machines, or can you perhaps live with smaller ones? CloudWatch metrics can show you how much IO, memory, and CPU are used over time. You can use provisioned IOPS with EC2 or RDS instances to improve performance (at a cost) as needed, and use different-sized instances for more memory or CPU.
All this AWS setup and configuration can be done with the AWS web console, or command-line tools, or SDKs available in many languages (Python's boto library is great). After learning the basics, look into CloudFormation to automate it better (I've written a couple of posts about that so far).
That's a bit of the 10,000 foot high view of one approach. You'll need to discover the details of each AWS service when you try to use them. AWS has good documentation about all of them.
Depending on how you look at it, this is more of a comment than it is an answer, but it was too long to write as a comment.
What you're asking for really can't be answered on SO; it's a huge, complex question. You're basically asking "How do I design a highly scalable, durable application that can be deployed on a cloud-based platform?" The answer depends largely on:
The specifics of your application--what does it do and how does it work?
Your tolerance for downtime balanced against your budget
Your present development and deployment workflow
The resources/skill sets you have on-staff to support the application
What your launch time frame looks like.
I run a software consulting company that specializes in consulting on Amazon Web Services architecture. About 80% of our business is investigating and answering these questions for our clients. It's a multi-week long project each time.
However, to get you pointed in the right direction, I'd recommend that you look at Elastic Beanstalk. It's a PaaS-like service that abstracts away the underlying AWS resources, making AWS easier to use for developers who don't have a lot of sysadmin experience. Think of it as "training wheels" for designing an autoscaling application on AWS.
Right now the website is running locally and I'm still working on it.
While doing this I also have to make it visible to a specific group of users as I need their feedback in order to add/change features, etc.
I've tried to find free web hosting, without any luck (see the dependencies below).
I was thinking of creating a VPN, but then I would have to use my PC as the host for a virtual machine, which is by far not what I'm looking for.
Therefore, my questions are:
1. Which is the best way to achieve this (website visibility for TESTING) quickly and easily?
2. If a dedicated web host is the best solution, please point me to one that is cheap and easy to use. What I've tried so far: elastichosts, alwaysdata, stackable, 1FreeHosting, and probably others I don't remember right now. For one reason or another, I couldn't use any of the above.
Another aspect to consider: I want this only for simple testing, and I don't need a lot of server resources. The traffic will also be very low, as there are only 5 testers. That's why I wouldn't pay too much for it. I will probably need this temporary web hosting for 2-3 months.
Dependencies:
- as the website uses Mezzanine, for the moment I only need Mezzanine's dependencies.
Thanks in advance!
You can always just set up port forwarding on your router. This would allow your testers direct access to your app, though it might give your PC more exposure than you want.
Heroku has a free tier.
Among the non-free options, an instance at Linode costs $20/month but requires some setup. Rackspace has similar options in their Cloud Servers line. Both are no-contract servers.
My blog post covers gracefully deploying a Mezzanine site. The monthly hosting cost is nothing compared to the cost of a slow, painful deployment process.
An EC2 micro instance currently costs as little as ~US$3.50/month. I create and destroy staging servers on EC2 for testing and sharing with others.
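Spinning staging servers up and down like that can be scripted with boto3; this is only a sketch of the idea, with the AMI id, key pair, and region as placeholders:

    # Sketch: throwaway staging servers with boto3. The AMI id and key
    # pair name are placeholders for your own values.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # Launch a small, cheap staging instance.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        KeyName="staging-key",            # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance = instances[0]
    instance.wait_until_running()
    instance.reload()
    print("Staging server up at", instance.public_ip_address)

    # ...share the address with testers, then tear it down when done...
    instance.terminate()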