Django website running too slow on Google Cloud Platform(GCP) - django

I am planning to migrate from DigitalOcean (DO) to Google Cloud Platform (GCP).
I have taken a trial on GCP and hosted my Django website on it, but it is running too slow (it takes 12-15 seconds to open a page). The same website running on DO is very fast (it takes barely 1 second to open a page).
The website is hosted on Ubuntu 20.04 LTS with an Apache2 server on both DO and GCP.
There are no users on GCP right now; I am the only one testing it, and it is still too slow. The VM has 2 CPUs and 8 GB of memory.
I am not able to figure out why it runs slowly on GCP but fast on DO.
Can someone please help me find a solution?

When comparing Google Cloud Platform performance with local performance, keep in mind that a deployment on GCP needs extra time to import all the necessary libraries and set up the Django framework.
In general it doesn't make much sense to compare performance on your local machine with performance on GCE, since the local machine is likely running a different OS than the GCE instance.
In addition, there are various ways to optimize your application's performance; typical ones include:
Scaling configuration: set "min_idle_instances" so that idle instances are kept running and ready to serve traffic.
Warm-up requests: use them to reduce request and response latency while your app's code is being loaded onto a newly created instance (see the sketch below).
I also came across PageSpeed Insights, which analyzes the content of a web page and generates suggestions to make that page faster; it could be handy here.
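To make the warm-up idea concrete, here is a minimal sketch of a Django warm-up handler, assuming an App Engine style deployment (warm-up requests and "min_idle_instances" are configured in app.yaml and do not apply to a plain Compute Engine VM); the view body is only a placeholder for whatever expensive start-up work your app does:

```python
# Minimal warm-up handler sketch for Django on App Engine (assumed setup).
# App Engine sends GET /_ah/warmup to a new instance before routing real
# traffic to it, provided warm-up requests are enabled in app.yaml.
from django.http import HttpResponse
from django.urls import path

def warmup(request):
    # Do one-time expensive work here (open DB connections, prime caches)
    # so the first real user request does not pay the start-up cost.
    return HttpResponse("ok")

urlpatterns = [
    path("_ah/warmup", warmup),
]
```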

Related

How to migrate to a virtual machine scale set in Windows Azure (ASP.NET)

I'm working on a web app and I want to migrate it to a virtual machine scale set in the Windows Azure cloud. I'm new to cloud computing and I haven't found a proper tutorial about virtual machine scale sets yet. Could someone please help me with this?
A few things to consider:
You could build a custom VM which contains the complete app, or you could use VM extensions to deploy the app on a platform image each time a new VM in the scale set is deployed. See: https://msftstack.wordpress.com/2016/04/20/deploying-applications-in-azure-vm-scale-sets/ for some thoughts on this. Ultimately it might depend on how much you need to install over a base image, and how fast you want scaling to be.
Do you need autoscale based on resource usage or do you plan to manually increase/decrease the number of VMs in the set? See https://azure.microsoft.com/en-us/documentation/articles/virtual-machine-scale-sets-windows-autoscale/
A good way to get started with scale sets is to deploy an existing template directly from Azure Quick start templates. Look at https://github.com/Azure/azure-quickstart-templates and search for vmss. These templates will give you an idea of some of the options you have.
To learn the basics about VM Scale Sets, start with the documentation page: https://azure.microsoft.com/documentation/services/virtual-machine-scale-sets/ and the GA announcement: https://azure.microsoft.com/en-us/blog/azure-virtual-machine-scale-sets-ga/
Also look at higher-level services like the Azure Web App service if you haven't already; the advantage of a higher-level service is that some of the basic web app operations are taken care of for you: https://azure.microsoft.com/en-us/services/app-service/web/

What is the difference between Bitnami and Click to deploy on GCE?

Just trying to understand the difference between Bitnami apps and Google 'click-to-deploy' options on Google Compute Engine.
For example, there is a 'Cassandra' click-to-deploy option and there is a Bitnami version of 'Cassandra'.
Can anyone tell me how they compare and what the differences are?
- Is one more restrictive than the other?
- Does the Bitnami version lock you in somehow?
- Is there any performance difference (other than the obvious difference that a hardware change would bring)?
Thanks.
Bitnami makes application stacks that run on several cloud platforms including Google Cloud Platform, AWS, Azure and a few others. The Bitnami images you see on Google Cloud Launcher are created by employees of Bitnami and are mostly standard across clouds.
Click to Deploy images are usually created by Google Cloud Platform employees working in conjunction with application vendors.
There are differences in versions here and there related to maintenance, but there isn't any difference in the way they are intended to be used. Some Click to Deploy images will incur higher usage charges due to licensing (i.e. the Click to Deploy image contains the "Pro" version of a vendor's software), but these are called out during the selection process.
Neither version is intended to lock you into a particular platform, Google or Bitnami; it's just that there is duplication among the applications provided.

Deploying first Django project on Amazon EC2 free tier

Finally the time has come and I'm ready to deploy my first Django project.
I'm a newbie in web development stuff and now the real fun begins.
This is a low scale site for computer jobs.
I want to start with a free tier and grow from there as need emerges.
I've read some guides regarding django project deployment but could not find all answers.
So I hope some folks here can help me out:
I've been thinking of getting an Amazon EC2 free tier VPS, is this a good option?
My local development machine runs Ubuntu. I've read that I could install a 10GB Ubuntu image, do you recommend such an image?
Should I go with Apache or a lighter web server?
My project is hosted on Bitbucket, I just need to check out my project on my VPS, right?
What about data backups? I would like to back up my MySQL DB.
How do you recommend serving the static files?
I'm looking for a good tutorial on how to set up AWS with Django and MySQL.
Thanks, guys!
I've been thinking of getting an Amazon EC2 free tier VPS, is this a good option?
If it fulfills your technology requirements (RAM, CPU, memory), it is a good option.
My local development machine runs Ubuntu. I've read that I could install a 10GB Ubuntu image, do you recommend such an image?
Might as well keep your environments the same if you can. If you can match up versions as well, that is another plus.
Should I go with Apache or a lighter web server?
Either. Apache would probably be easier to deploy at this point because you don't have to worry about running it as a service (using a program like supervisor to manage it).
Whichever one you choose, there is an abundance of tutorials online describing how to set up Django.
My project is hosted on Bitbucket, I just need to check out my project on my VPS, right?
That is one way. There are lots of ways to deploy. I like syncing the actual files using Fabric; that way your production server doesn't need to know about your Bitbucket account. Once again, there are plenty of tutorials online describing how to deploy Django, and Fabric is a great place to start.
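As a rough illustration of the Fabric approach, here is a minimal fabfile sketch; the host, paths and service name are placeholders, and it assumes the Fabric 2.x API (the original answer predates it, and 1.x code looks different):

```python
# fabfile.py -- minimal deploy sketch (Fabric 2.x API assumed).
# Run with: fab -H deploy@your-server.example.com deploy
from fabric import task

@task
def deploy(c):
    # Push a release archive and unpack it on the server, so the VPS
    # never needs access to your Bitbucket account.
    c.put("dist/mysite.tar.gz", "/tmp/mysite.tar.gz")
    c.run("tar -xzf /tmp/mysite.tar.gz -C /srv/mysite")
    c.run("/srv/mysite/venv/bin/pip install -r /srv/mysite/requirements.txt")
    c.run("/srv/mysite/venv/bin/python /srv/mysite/manage.py migrate")
    c.run("sudo systemctl restart apache2")
```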
What about data backups? I would like to back up my MySQL DB.
There exist lots of tools for this: plenty of premade tools and shell scripts. I have used automysqlbackup and it works great: http://sourceforge.net/projects/automysqlbackup/
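If you would rather roll your own than use a premade tool, a cron-driven dump script can be only a few lines; the sketch below is illustrative, with the database name and backup directory as placeholders, and it assumes credentials live in ~/.my.cnf so they are not passed on the command line:

```python
# backup_db.py -- bare-bones mysqldump wrapper (illustrative sketch).
# Assumes mysqldump is installed and credentials are in ~/.my.cnf.
import datetime
import gzip
import subprocess

def dump_mysql(db="mysite", outdir="/var/backups/mysql"):
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = f"{outdir}/{db}-{stamp}.sql.gz"
    # --single-transaction gives a consistent dump of InnoDB tables
    # without locking them for the duration of the backup.
    dump = subprocess.run(
        ["mysqldump", "--single-transaction", db],
        check=True, capture_output=True,
    )
    with gzip.open(outfile, "wb") as fh:
        fh.write(dump.stdout)
    return outfile

if __name__ == "__main__":
    print(dump_mysql())
```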
How do you recommend serving the static files?
Make sure the web server serves them. If you deploy through Apache you can set up an alias to serve static files very easily. You can come up with a collectstatic deployment scheme to put your static files on S3, but for a simple site Apache is just fine.
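For reference, the Django side of that setup is just two settings plus a collectstatic run; the paths below are illustrative and should match whatever Alias directive you add to your Apache config (e.g. an alias mapping /static/ to the STATIC_ROOT directory):

```python
# settings.py -- the pieces `manage.py collectstatic` needs (paths are
# placeholders; point your Apache Alias at STATIC_ROOT).
STATIC_URL = "/static/"
STATIC_ROOT = "/srv/mysite/static"
```

After running `python manage.py collectstatic`, Apache can serve everything under that directory directly, without the request ever reaching Django.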
I'm looking for a good tutorial on how to set up AWS with Django and MySQL.
Perhaps you can find a single tutorial that covers all of this, but more likely you will find separate tutorials for:
how to set up AWS with Ubuntu
installing Django / MySQL on Ubuntu

Deploying WordPress on Elastic Beanstalk?

Suppose I create a site in WordPress, which is running on Elastic Beanstalk. Now, on the running app I will create posts/pages, upload images, etc. That is, some data, videos, files and records in a database will be added to the running application.
3 questions:
If WordPress is running on Elastic Beanstalk with multiple Amazon EC2 instances actually running my WordPress install, will those files propagate automatically to all running instances? And will this also happen if a new EC2 instance is fired up, for example to handle increased load?
From what I see in the AWS console, I can deploy different versions of an app, but as per the scenario above, if I deploy a new version, won't I lose all the data added directly to the running app (i.e. files and database records)? How do I keep those and at the same time deploy a new version of my app?
The WordPress team keeps issuing upgrades. Can I directly upgrade my running WordPress install through the web interface? Or do I have to first upgrade my local version of WordPress and then upload the new version of the app to Beanstalk? If I have to upgrade my local version and then upload, then I am back to point 1, i.e. changes made by users directly to the older version of the running app. How do I preserve those changes?
I've been working on this as well, and have learned a couple of things that are relevant here; your question about uploads in particular has been on my mind:
(1) One way to handle uploads is to go the NFS/NAS route like you suggest, but better than that is to use an Amazon S3 plugin for WordPress, so that any uploads are automatically copied up to S3 and the URLs in your WordPress media library reflect the FQDN of your bucket rather than your specific site. That way you could have one or ten WP nodes in your Beanstalk and media/images would be independent of any one of those servers.
(2) You should absolutely be using RDS here. Few things are easier to work with and as stress-free as a Multi-AZ, reserved MySQL RDS instance. Either that or your own EC2 running MySQL that is independent of the Beanstalk, but why run that when RDS is so much easier?
(3) Yes you definitely have to commit changes to your Git repository or local file first (new plugins, changes to themes, WP upgrades) and then upload/install as a revision to the Beanstalk code. Otherwise, all the changes you make via the web interface to one node will never be in the new load for a new node -- in fact you'll have an upgraded database but an older set of code in the Beanstalk application, so it's likely to create errors of some kind or another.
I took an AWS architecture course, and their advice for EC2 and the Beanstalk is to start thinking about server instances as very disposable, so you should look for easy ways for your boxes to provision themselves in the bootstrapping process and to take over work for one another without any precious resources living on just one box. Losing an instance should never be a big deal. (This is definitely not how we thought in the world of physical servers, where we got everything tweaked 'just so'.)
Good luck!
Well, I'm no expert, but since no one has answered, I'll give it my best shot.
You are absolutely right, kind of. While each EC2 instance does have some local storage, it is destroyed and reset with each new instance. Because of this, Amazon has things like Elastic Block Storage and S3 for persistent files. I don't know exactly how one would configure WP to utilize this, but that will likely be the solution.
I think this problem is solved by my answer to #1. As for the database, all of your EC2 instances should be pulling from the same RDS location. Again, while you could have MySQL running on each EC2 instance, in the interest of persistence, having a separate database makes more sense.
You, again, have most everything right. Local development should always precede live deployment. Upgrading locally then pushing to the live servers will make sure all of your instances remain identical.
Truth be told, I am still in the process of learning all of this too; as I said, I'm not an expert. Hopefully someone else will come along and give a more informed answer. However, the key conceptual hurdle here is the idea of elastic scalability, and the salient point of that idea is the separation between what is elastic/scalable/disposable and what is persistent.
Hopefully that helps.
I have deployed a small WordPress site on EB, S3 and RDS. S3 holds all static data, such as media uploads. This works through a plugin. RDS holds the database. EB holds the latest deployed application. The application is deployed from my dev environment with a build script. This way, I just have to press one button and I redeploy.
I wrote an article about it here: http://www.cortexcode.com/wordpress-to-aws-code-example/
While it was at first annoying to work with, the speed of AWS is nice and now it's easier than ever. It used to be that I had to upload a bunch of files over FTP; this is way more efficient. :-)
As an addition to all the great answers already:
1) I can highly recommend EFS, and also S3 for media files, so they are served from high-availability regions in combination with CloudFront. For WordPress there is one plugin that really speeds this up (not affiliated with them, I just really like the plugin). There is also an assets plugin if you'd like to serve JS and CSS files from S3. For the EFS solution, take a look at the awslabs docs on GitHub, and specifically this file on how they mount the uploads folder.
In general, Elastic Beanstalk is really great for WordPress, but you'll need a different mindset compared to other hosting solutions (shared hosting, managed hosting).
OK, I researched this particular issue a lot, and this is what I learned:
(1) If a WordPress user uploads some files, those files are uploaded only to the virtual machine that is actually serving his request at that time. For example, if the WordPress site is cloud-deployed and is using 5 virtual machines, when a user makes a request he is directed to one virtual machine (the one with the lowest load at that point), and his uploads are stored only on that server. Current Platform-as-a-Service solutions (like Amazon Elastic Beanstalk and App Fog) do not have the ability to propagate the changes to all the running instances. Propagating changes to all servers, or using storage common to all running instances, are the only 2 solutions to this problem. (An example of common storage would be all 5 running virtual machines using Network-Attached Storage (NAS).)
(2) With reference to platforms available currently, like Amazon Elastic Beanstalk and App Fog, even if a user makes changes directly to the running app, these platforms rely on the local version of the code (which the admin deployed initially to the cloud), and there is no way to update the local version of the code (on the admin's PC) with the changes made by a user to the running app. Hence those changes, viz. files, are lost. Similarly, changes made by a user to the database of the running app are also lost, unless the admin is using exactly the same database for his local app as the one he deployed to the cloud.
(3) Any changes to running apps first have to be made to the local app on the admin's PC and then pushed to the cloud.
I am working on a cloud PaaS that addresses all these concerns: updates can be made to running apps, and code changes made to a running app are also updated in a code repository accessible by the user. The proof of concept is ready; hopefully it will be as good as I hope it should be :) Currently the only thing that is actually there is the website (anyacloudpanel.com); design work is going on :)
If there is some rule that I should not mention my website (Anya Cloud Panel), then I am sorry; please feel free to edit and remove my website URL from my answer :)
Thanks,
Arvind.
Deploying WordPress to AWS Elastic Beanstalk does require some change to the normal WordPress deployment as mentioned here a few times. To answer your questions, here is a great tutorial explaining stateless applications and how to deploy to Elastic Beanstalk:
Deploying WordPress to Amazon Web Services AWS EC2 and RDS via ElasticBeanstalk
Be careful if you use a theme from ThemeForest, for example. Some of them are incompatible with the WordPress S3 plugin. Then you're stuck: you cannot deploy your WordPress site to the cloud.

How does cloud foundry handle process isolation?

Let's say that I set up my own cloud using the open source Cloud Foundry implementation provided on cloudfoundry.org. Will each app that I deploy be run as a separate user? Or is any of VMware's virtualization technology in use here? E.g. would each app run in a separate virtual machine or anything like that? How can I configure the memory, CPU, and disk resource limits for each app?
I asked this on the mailing list. Here's the response I got:
If your DEA is configured to run in secure mode, then each app runs as its own user, and process isolation is used to protect them. We are moving toward a model of using Linux cgroups (http://en.wikipedia.org/wiki/Cgroups) when on Linux, using the warden cgroup wrappers that are already in our source tree.
VM-based isolation for a single app is pretty heavyweight, but we have long-term plans to provide it for apps that need/desire it (as opposed to the warden/cgroup work, which is a near-term project).
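For a sense of what the cgroup mechanism mentioned above looks like at the lowest level, here is a rough, hypothetical sketch (not warden or Cloud Foundry code) that caps a process's memory by writing to the cgroup filesystem; it assumes a cgroup-v1 style hierarchy mounted at /sys/fs/cgroup/memory, requires root, and the file names differ under cgroup v2:

```python
# Illustrative sketch of a cgroup-v1 memory cap (not Cloud Foundry code).
# Requires root; under cgroup v2 the paths and file names are different.
import os

def limit_memory(pid, limit_bytes, name="demo-app"):
    cgroup_dir = f"/sys/fs/cgroup/memory/{name}"
    os.makedirs(cgroup_dir, exist_ok=True)
    # Set the hard memory limit for the group.
    with open(f"{cgroup_dir}/memory.limit_in_bytes", "w") as fh:
        fh.write(str(limit_bytes))
    # Move the process into the group; the kernel then enforces the cap.
    with open(f"{cgroup_dir}/cgroup.procs", "w") as fh:
        fh.write(str(pid))

if __name__ == "__main__":
    # Cap the current process at 256 MiB as a demonstration.
    limit_memory(os.getpid(), 256 * 1024 * 1024)
```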
Since this is related to the open source for cloud foundry, you can try asking your question on https://groups.google.com/a/cloudfoundry.org/group/vcap-dev
You should get a quick response there!