I have a Django web app that uses a Postgres DB. It allows users to log in and make posts, which get saved to the DB; later the user can list how many posts they made on a particular day, list the posts belonging to a particular category, and so on. While this worked without any delay on my machine, each page takes a long time to load when hosted on a free host.
How do you find out why this is happening? Which part of the app should I look at first? Is there any point in using a profiler, since this app used to run with no delays on my local machine?
I would like to find out how to approach this problem in general. I was able to access other apps hosted on the same free host without much delay, so this may be a problem specific to my app.
I would appreciate any advice. Thank you.
P.S.: I intentionally left out the host's name because it is a free service, so there is no point in complaining; besides, the other apps on the same host work well.
The key here is the free host bit: on a free host you could be sharing a box with hundreds of other sites, which can equate to a very small amount of RAM or CPU. Pay a little money ($30 / £22 a year) and get yourself a better host.
You will find the performance and reliability so much better.
Failing that, I would first find out what the latency between you and the server is. On a local machine there is little to no network traffic, so your pages will appear to load a lot faster.
Next I would look at the actual download speeds you are getting. It could be that your site is limited to 20-30 KB/s, which means even a small page will take over a second to load.
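As a rough sketch of how you might measure both at once, assuming Python 3 is available (the URL is a placeholder for a real page on the free host):

```python
import time
import urllib.request

# Placeholder URL -- point this at a real page on your host.
URL = "http://example.com/posts/"

start = time.monotonic()
response = urllib.request.urlopen(URL)
ttfb = time.monotonic() - start   # headers received: rough latency + server time
body = response.read()
total = time.monotonic() - start

size_kb = len(body) / 1024
print("Time to first byte: %.3fs" % ttfb)
print("Total: %.3fs for %.1f KB (%.1f KB/s)" % (total, size_kb, size_kb / total))
```

Run it from your own machine against both localhost and the free host; a big gap in time-to-first-byte points at network latency or an overloaded box, while a low KB/s figure points at bandwidth throttling.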
Are you hosting many images? If so, are you serving them through Django, or is the web server doing this? If it is Django, make the web server take this load.
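In Django that usually comes down to keeping media under a URL prefix the front-end web server serves straight from disk. A minimal settings sketch (the path is a hypothetical example, not from the question):

```python
# settings.py -- sketch only; adjust names and paths to your project.

# URL prefix for uploaded media. Configure the front-end web server
# (e.g. an Apache Alias or nginx location) to serve this prefix directly
# from MEDIA_ROOT, so image requests never reach Django at all.
MEDIA_URL = '/media/'
MEDIA_ROOT = '/var/www/myapp/media/'   # hypothetical path
```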
Finally, check the processing speed of the pages. Analyse the queries being run and find out what is taking the time. Make sure that Postgres is correctly configured and has enough resources. You can analyse query speed using the Django Debug Toolbar.
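A minimal setup sketch for the toolbar (exact setting names vary a little between toolbar versions, so treat this as an outline and check its docs):

```python
# settings.py -- django-debug-toolbar sketch.

DEBUG = True  # the toolbar only renders when DEBUG is on

INSTALLED_APPS = [
    # ... your existing apps ...
    'debug_toolbar',
]

MIDDLEWARE = [
    'debug_toolbar.middleware.DebugToolbarMiddleware',
    # ... the rest of your middleware ...
]

INTERNAL_IPS = ['127.0.0.1']  # the toolbar is shown only to these client IPs
```

With that in place, each page gets a panel listing every SQL query and how long it took, which makes slow or repeated queries easy to spot.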
This is my first attempt at looking into cloud hosting and I'm feeling like a complete idiot. I have always had my own dedicated server, which I would remote into and install/manage everything on myself. So this cloud thing is completely new for me. I just can't seem to grasp basic things... like how I would get Tomcat and PostgreSQL installed in a way that they could talk to each other, or get my domain and SSL cert on there, etc.
If I could just get a feel for where I should start, then I could probably calculate my costs and jump into the free trial where hopefully things will click for me.
Here are my basic, high-level requirements...
My web app running in Tomcat over HTTPS
Let's say approximately 1,000 page views per day
PostgreSQL supporting my web app.
Let's say approximately 10GB database storage
Throughout the day, a fairly steady stream of inbound SFTP data (~ 100MB per day)
The processing load on the app server side should be fairly light. The heavy lifting will be on the DB side, sorting through and processing lots of data.
I'm having trouble figuring out which options to install and how to calculate costs. It would help if someone could get me started by saying something like "You would start with a std-xyz-med server, install ABC located at http://blahblah, then install XYZ located at http://XYZ... etc. You can expect to pay somewhere around $100-$200 per month"...
Thoughts?
I would be eternally grateful. It seems like they should have some free sales support channel to ask someone at Google about this, but I don't see it.
Thank You!
I'll try to give you some tips on where to start looking.
I will be referring to a few Google Cloud products along the way, with links where relevant.
If you want to stick to your old ways, you can always spin up an instance on Compute Engine and set it up the same way you did before; these are just regular virtual machines. For some use cases this is completely valid.
You can split different components of your stack to different products:
For example, if your app is fine with PostgreSQL, you can spin up a fully managed instance in Cloud SQL, which might make it easier to manage backups or have several apps access the same database.
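To make that concrete, here is a minimal connection sketch (shown in Python with psycopg2 for brevity; from Tomcat you would do the equivalent over JDBC). All connection details are placeholders, and with the Cloud SQL Proxy running you would typically connect via 127.0.0.1 or a local socket rather than a public IP:

```python
import psycopg2  # pip install psycopg2-binary

# Placeholder credentials and host, for illustration only.
conn = psycopg2.connect(
    host="127.0.0.1",      # Cloud SQL Proxy endpoint (assumed)
    port=5432,
    dbname="myappdb",
    user="appuser",
    password="change-me",
)

with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])

conn.close()
```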
Alternatively, have a look at the different DB offerings to see if any of them matches your workload better. Perhaps have a look at BigQuery?
If you want to turn your app into a microservice, which is then easier to autoscale and more fault-tolerant, have a look at App Engine. That way you don't need to manage a virtual machine. The docs there will lead you through some easy-to-follow examples of how to set up SSL.
For the services to talk to each other, refer to docs of the individual components. It's usually very simple.
For pricing, try https://cloud.google.com/products/calculator/
Things like BigQuery have different pricing models: you don't pay for server uptime, but for the amount of data stored and processed by your queries.
I was contracted to make a Groupon-clone website for my client. It was done in PHP with MySQL, and I plan to host it on an Amazon EC2 server. My client warned me that he will be email-blasting about 10k customers, so my site needs to be able to handle that surge of clicks from those emails. I have two questions:
1) Which Amazon instance type should I choose? Right now I am on a Small instance; I wonder if I should upgrade to a Large instance for the week of the email blast.
2) What configurations need to be set for a LAMP server? For example, do the Amazon server, Apache, PHP, or MySQL have a maximum-connections limit that I should adjust?
Thanks
Technically, putting the static pages, the PHP and the DB on the same instance isn't the best route to take if you want a highly scalable system. That said, if the budget is low and high availability isn't a problem, you may get away with it in practice.
One option, as you say, is to re-launch the server on a larger instance size for the period you expect heavy traffic. Often this works well enough. Your problem is that you don't know the exact shape of the traffic that will come. A certain percentage of users will be at their computers when the email arrives and will go straight to the site; the rest will trickle in over time. Having your client send the email while the majority of the users are in bed would help you somewhat, if that's possible, by avoiding the surge.
If we take the case of, say, 2,000 users hitting your site in 10 minutes, I doubt a site that hasn't been optimised would cope; there's very likely to be a silly bottleneck in there. The DB is often the problem, and a good-sized in-memory cache often helps.
All this said, there are a number of architectural designs and features provided by the likes of Amazon and GAE that enable you, with a correctly designed back-end, to worry very little about scalability; it is handled for you for the most part.
If you split the database away from the web server, you would be able to put the web server instances behind an Elastic Load Balancer and have it scale the number of instances with demand. There are also standard patterns for scaling databases, though there isn't any particular feature to help you with that, apart from database instances.
You might want to try Amazon Mechanical Turk, which is basically lots of people who'll perform often trivial tasks (like navigate to a web page, click on this, etc.) for a usually very small fee. It's not a bad way to simulate real traffic.
That said, you'd probably have to repeat this several times, so you're better off with a load-testing tool. And remember, you can't load-test a time-slicing instance with another time-slicing instance...
Is there an easy way to migrate a hosted LAMP site to Amazon Web Services? I have hobby sites and sites for family members where we're spending far too much per month compared to what we would be paying on AWS.
Typical el cheapo example of what I'd like to move over to AWS:
GoDaddy domain
site hosted at 1&1 or MochaHost
a handful of PHP files within a certain directory structure
a small MySQL database
.htaccess file for URL rewriting and the like
The tutorials I've found online necessitate PuTTY, Linux commands, etc. While these aren't the most cumbersome hurdles imaginable, it seems overly complicated. What's the easiest way to do this?
The ideal solution would be something like what you do to set up a web host: point GoDaddy to it, upload files, import database, done. (Bonus points for phpMyAdmin being already installed but certainly not necessary.)
It would seem the Amazon AWS Marketplace now has a solution for your problem:
https://aws.amazon.com/marketplace/pp/B0078UIFF2/ref=gtw_msl_title/182-2227858-3810327?ie=UTF8&pf_rd_r=1RMV12H8SJEKSDPC569Y&pf_rd_m=A33KC2ESLMUT5Y&pf_rd_t=101&pf_rd_i=awsmp-gateway-1&pf_rd_p=1362852262&pf_rd_s=right-3
Or from their own site
http://www.turnkeylinux.org/lampstack
A full LAMP stack including phpMyAdmin with no setup required.
As for the site and database migration itself (which should require no more than file copies and a database backup/restore), the only way to make it less cumbersome is to have someone else do it for you...
Dinah,
As a Web Development company I've experienced an unreal number of hosting companies. I've also been very closely involved with investigating cloud hosting solutions for sites in the LAMP and Windows stacks.
You've quoted GoDaddy, 1&1 and MochaHost for micro-sized Linux sites, so I'm guessing you're using a benchmark of $2-$4 per month, per site. It sounds like you have a "few" sites (5ish?) and need at least one database.
I've yet to see any tool that will move more than the most basic (i.e. file-only, no DB) websites into cloud hosting. As most people are suggesting, there isn't much you can do to avoid the initial environment setup. (You should factor your time in too: if you spend 10 hours doing this, you could bill clients 10 x $hourly-rate and have just bought the hosting for your friends and family.)
When you look at AWS (or anyone) remember these things:
Compute cycles are only where it starts. When you buy hosting from traditional ISPs they are selling you cycles, disk space AND database hosting. Their default allowances for cycles, database size and traffic are also typically much higher before you are stopped or charged for "overage", or over-usage.
Factor in the cost of your 1 database, and consider how likely it will be that you need more. The database hosting charges can increase Cloud costs very quickly.
While you are likely going to need only a few CCs (compute cycles) for your basic sites, the free-tier maximums are still pretty low. Anticipate breaking past the free tier and being charged monthly.
Disk space is also billed. Factor in your costs of CCs, DB and HDD by using their pricing estimator: http://calculator.s3.amazonaws.com/calc5.html
If your friends and family want access to the system, they won't get it unless you use a hosting company that allows "white labeling" and provides a way to split your main account into smaller mini-hosting accounts. These can even be set up with self-admin and direct-billing options if you go with a host like www.rackspace.com. The problem is that you don't sound like you want to bill anyone, and their minimum account is likely way too big for your needs.
Remember that GoDaddy (and others) frequently give away a year of hosting with even simple domain registrations. Before I got my own servers I used to take HUGE advantage of these; I've probably been given 40+ free hosting accounts in my lifetime as a client. (I still register a ton of domains through them. I also resell their hosting.)
If you aren't already, consider the use of CMS systems that support portaling (one instance, many websites under different domains). While I personally prefer DotNetNuke I'm sure that one of its LAMP stack competitors can do the same for you. This will keep you using only one database and simplify your needs further.
I hope this helps you make a well-educated choice. I think it'll be a fine line between benefits and costs. Only knowing the exact size of every site, every database and the typical traffic would allow this to be determined in advance. Database count and traffic will be your main "enemies". Optimize files to reduce both your disk-space needs AND your traffic levels in terms of data transferred.
Best of luck.
Actually, it depends on your server architecture whether you want to migrate the whole of your LAMP stack to Amazon EC2, or use different Amazon web services for different server components, like Amazon S3 for storage and Amazon RDS for the MySQL database.
If you are going with LAMP on EC2, this tutorial will at least give you a head start.
Either way, you will still have to go through the essential steps of setting up the AMI and installing LAMP over SSH.
I'm trying to understand how many EC2 servers I should start.
I understand the point of AWS is to be able to scale up quickly, but just for cost estimates, how many (approximately) micro EC2 instances would be needed to run a simple PHP web app?
Just for the sake of estimating, assume the app is loading CodeIgniter and serving a static page without any database access.
Any ideas?
It completely depends on the type of site that you have. If it is static web pages, then one server with caching should be fine. Even dynamic pages should be fine if you do caching in the right places.
Depending on how much traffic the sites get, you could see several hundred to several thousand hits per minute. An EC2 instance should be able to just about manage that (for a mostly static web page).
I would recommend not worrying about it. Any spike will last a day at most. If you need to budget, plan for 100 machines for one day. If you really need all of them, then you have a few hours to build a simple static email-collection page and redirect most of your traffic there.
I'm planning to deploy a Django-powered site, but I feel confused about the choice of web servers, which include Apache, lighttpd, nginx and others.
I've read some articles about the performance of each of these choices, but it seems no one agrees. So I'm wondering: why not test the performance myself?
I can't find information about the best approach to performance testing web servers. So my questions are:
Is there any easy approach to test the performance without the production site?
Or is there a method to simulate heavy traffic so the test is fair?
How can I keep my test fair and close to production situation?
After the test, I want to figure out:
Why some say nginx has better performance when serving static files.
The CPU and memory needs of each web server.
My best choice.
Tools like ab are commonly used to test how much load you can take from a battering of requests at once; alongside Cacti, Munin or your system-monitoring tool of choice, you can generate data on system load and requests/sec. The problem is that many people benchmarking don't realise they need to make many different kinds of requests, since different parts of your code take varying amounts of time to execute. Profiling and benchmarking the code itself, not just the requests, is also important; plenty of folk have already done this for Django, and benchrun is not a bad tool either.
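With ab itself, something like ab -n 500 -c 20 http://localhost:8000/ batters a single URL. The sketch below (plain Python 3; the URLs are placeholders for a mix of your own views) instead spreads concurrent requests across several pages and reports per-URL averages, which is closer to real traffic:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Placeholder URLs -- use a representative mix of your site's views.
URLS = [
    "http://localhost:8000/",
    "http://localhost:8000/posts/",
    "http://localhost:8000/posts/?category=news",
]

def timed_get(url):
    start = time.monotonic()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return url, time.monotonic() - start

# Each URL is requested 50 times, with up to 20 requests in flight at once.
with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(timed_get, URLS * 50))

for url in URLS:
    times = [t for u, t in results if u == url]
    print("%s: avg %.3fs over %d requests"
          % (url, sum(times) / len(times), len(times)))
```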
The other issue is how many HTTP requests each page view takes. The fewer requests a page needs, and the quicker they can be processed, the more traffic a website can sustain: the quicker you finish and close connections, the quicker you free resources for new ones.
In terms of general web server speed, it goes without saying that a proxy server (running in reverse at your end) will always perform faster than the web server serving the static content itself. As for Apache vs nginx with regard to your Django app, it seems that mod_python is indeed faster than nginx/lighty + FastCGI, but that's no surprise, because CGI, regardless of any speed-ups, is still slow. Executing and caching code at the web server and letting it manage it is always faster (mod_perl vs use CGI, mod_php vs CGI, etc.) if you do it right.
Apache JMeter is an excellent tool for stress-testing web applications. It can be used with any web server, not just Apache.
You need to set up the web server + website of your choice on a machine somewhere, preferably a physical machine with similar hardware specs to the one you will eventually be deploying to.
You then need to use a load testing framework, for example The Grinder (free), to simulate many users using your site at the same time.
The load testing framework should be on separate machine(s) and you should monitor the network and CPU usage of those machines as well to make sure that the limiting factor of your testing is in fact the web server and not your load injectors.
Other than that, it's just about altering the content and monitoring response times, throughput, memory and CPU use, etc., to see how they change depending on which web server you use and what sort of content you are hosting.
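To make the Grinder suggestion concrete, here is a minimal Grinder 3 test script, sketched from its standard HTTP examples (Grinder scripts are written in Jython; the URL is a placeholder for your test server):

```python
# grinder_test.py -- minimal Grinder 3 script sketch.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrapping the request in a Test makes The Grinder record timing stats for it.
front_page = Test(1, "Front page").wrap(HTTPRequest())

class TestRunner:
    # Each simulated user (worker thread) calls this once per run.
    def __call__(self):
        result = front_page.GET("http://your-test-server:8000/")  # placeholder
        grinder.logger.info("Status: %s" % result.statusCode)
```

You point grinder.properties at this script and raise the process/thread counts there to simulate more concurrent users, while watching CPU and network on the injector machines as noted above.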