I'm at a loss here.
I am attempting to move a Django application to EC2. I have moved the DB to RDS (Postgres) and have static and media files on S3.
However, for some reason all my pages are taking 25-30 seconds to load. I have checked the instance, and CPU and memory barely blip. I turned off KeepAlive in Apache and switched mod_wsgi to daemon mode, but neither made any difference. I have gone into a shell on the machine and accessed the DB, and it appears to respond fine as well. I have also increased the size of the EC2 instance, with no effect.
S3 assets are also being delivered quickly and without issue; only the rendering of the HTML pages is slow.
On our current live and test servers there are no issues: the pages load in milliseconds.
Can anyone point me to where or what I should be looking at?
Marc
The issue appeared to be connected with using RDS. I installed Postgres on the EC2 instance itself and, apart from a little mucking around, it worked fine there.
I'm going to try building a new RDS instance, but that was the issue here. Strangely, it worked fine when accessed directly via manage.py shell.
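If anyone hits something similar, one quick check is to time a raw connection to the RDS endpoint from the EC2 box, to see whether the seconds are going into connection setup (DNS, security groups, SSL) rather than into the queries themselves. This is just a sketch; the endpoint and credentials are placeholders, and it assumes psycopg2 is installed:

```python
# Rough connection-latency check against RDS, run from the EC2 instance.
# Host, database and credentials below are placeholders, not real values.
import time
import psycopg2

start = time.time()
conn = psycopg2.connect(
    host="mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com",  # placeholder RDS endpoint
    dbname="mydb",
    user="myuser",
    password="mypassword",
    connect_timeout=10,
)
print(f"connect took {time.time() - start:.2f}s")

start = time.time()
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    cur.fetchone()
print(f"simple query took {time.time() - start:.2f}s")
conn.close()
```

If the connect step is the slow part while the query itself is fast, that points at per-request connection setup; Django's CONN_MAX_AGE setting can at least reuse connections while the underlying cause is tracked down.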
I have an EC2 instance that I use as a WordPress server, and over the weekend the DB got corrupted, so today I had no choice but to restore it from a snapshot taken before the weekend. The restore worked, and everything seems to be working fine except for one big thing: the images are not showing up. On the website the images are gone, and when I log in to the admin the images are listed, but grey.
link to image: https://imgur.com/a/TPV1HXZ
I followed the official guide for restoring, and everything else is working. Any ideas as to what I can do?
I am planning to migrate from DigitalOcean (DO) to Google Cloud (GCP).
I have taken a trial on GCP and hosted my Django website on it, but it is running very slowly (it takes 12-15 seconds to open a page). The same website running on DO is very fast (it takes hardly 1 second to open a page).
The website is hosted on Ubuntu 20 LTS with an Apache2 server on both DO and GCP.
There are no real users on GCP right now; I am the only user testing it, and it is still this slow. The VM has 2 CPUs and 8 GB of memory.
I am not able to figure out why it runs slowly on GCP but fast on DO.
Can someone please help me find a solution?
When comparing Google Cloud Platform performance with your existing setup, keep in mind that a fresh deployment on GCP needs extra time to import all the necessary libraries and set up the Django framework.
In general it doesn't make much sense to compare the performance of your local machine with the performance of GCE, as local machines are likely running a different OS than GCE instances.
In addition, there are various ways to optimize your application's performance; typical ones include:
Scaling configuration, by setting “min_idle_instances” so that idle instances are kept running and ready to serve traffic.
Using warm-up requests to reduce request and response latency while your app's code is being loaded onto a newly created instance (see the sketch after this list).
I also came across PageSpeed Insights, which analyzes the content of a web page and generates suggestions to make it faster; it could be handy.
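As a rough illustration of the warm-up idea, a handler in Django might look something like this. It is only a sketch: it assumes an App Engine-style environment that sends warm-up requests to /_ah/warmup (which usually also has to be enabled via inbound_services in app.yaml), and the view name is illustrative.

```python
# urls.py: a minimal warm-up endpoint (sketch; assumes App Engine-style
# warm-up requests are sent to /_ah/warmup).
from django.http import HttpResponse
from django.urls import path

def warmup(request):
    # Touch anything expensive here (open a DB connection, prime caches)
    # so the first real request does not pay the full cold-start cost.
    return HttpResponse("warmed up", status=200)

urlpatterns = [
    path("_ah/warmup", warmup),
]
```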
I have an Elasticsearch cluster on an AWS EC2 instance: a t3.small with 2 vCPUs and 2 GB of RAM. I have installed Elasticsearch and Kibana, and for extensions I have installed Heartbeat and Metricbeat. The database I'm working with is MongoDB and all my data is NoSQL. I feed my engine from my MongoDB cluster, which is on my local machine, using a script, and I run queries against the engine from my app and also from the console. So far so good, everything is fine, although the cluster is always yellow, never green.
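For context, the feed script is roughly of this shape (a simplified sketch rather than the actual script; the connection strings, database, collection and index names are placeholders, and the index mapping with text and geo_point fields is assumed to already exist):

```python
# Sketch of a MongoDB-to-Elasticsearch feed script; URIs and names are
# placeholders, and the target index mapping is assumed to already exist.
from pymongo import MongoClient
from elasticsearch import Elasticsearch, helpers

mongo = MongoClient("mongodb://localhost:27017")
collection = mongo["mydb"]["places"]            # placeholder db/collection
es = Elasticsearch("http://my-ec2-host:9200")   # placeholder endpoint

def actions():
    for doc in collection.find():
        doc_id = str(doc.pop("_id"))            # ObjectId is not JSON serialisable
        yield {"_index": "places", "_id": doc_id, "_source": doc}

# helpers.bulk chunks the requests and reports successes and failures.
ok, errors = helpers.bulk(es, actions(), raise_on_error=False)
print(f"indexed {ok} documents, {len(errors)} errors")
```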
The problem starts after hitting the engine with multiple requests. After 50 or 60 search queries the data just disappears: somehow the engine is forcefully dropping my indices and cannot restore the data (obviously I have no snapshot and no restore point), so I keep losing it and have to manually feed the engine again and again. At first I had 1 GB of RAM, so I thought upgrading would fix the issue, but after upgrading to 2 GB of RAM it didn't stop; the data just stays around a little longer now.
So here are my DB configs.
I have 70K+ NoSQL documents.
They contain text and geo_point field types.
I make POST requests to the engine through my front-end application.
I don't have Logstash installed, but Metricbeat is not showing any error logs.
The whole Elasticsearch setup is for testing purposes; this is not production.
We will upgrade when we go to production.
So I need to know:
what is causing this, and
how to prevent this data loss.
Please help me, or just suggest how I can solve this problem.
Thank you
Ideally, the first thing you should do is get the cluster to green.
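As a starting point, you can inspect the cluster and shard state with the official Python client; this is only a sketch and the endpoint is a placeholder (a single-node cluster is typically yellow simply because replica shards have no second node to live on):

```python
# Quick look at cluster health and shard allocation (sketch; placeholder endpoint).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

print(es.cluster.health())              # overall status, unassigned shard count
print(es.cat.indices())                 # per-index health and document counts
# Explains why a shard is unassigned; on one node this usually reports that
# replicas cannot be allocated to the same node as their primaries.
print(es.cluster.allocation_explain())
```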
To see the exact Elasticsearch error that is causing this situation, you should look at the elasticsearch.log file; it will contain the exact cause.
One way to keep cluster data safe is to take regular snapshots and restore them in case of data loss. Details of the snapshot procedure can be found in the Elasticsearch snapshot and restore documentation.
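For example, registering a filesystem repository and taking a snapshot can be done from Python roughly like this (a sketch using the 7.x-style client API; the repository path and names are placeholders, and the path also has to be listed under path.repo in elasticsearch.yml):

```python
# Register a filesystem snapshot repository and take a snapshot (sketch;
# names and paths are placeholders).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.snapshot.create_repository(
    repository="my_backup",
    body={"type": "fs", "settings": {"location": "/mnt/es-backups"}},
)
es.snapshot.create(
    repository="my_backup",
    snapshot="snapshot-1",
    wait_for_completion=True,
)

# To restore later (the affected indices must be closed or deleted first):
# es.snapshot.restore(repository="my_backup", snapshot="snapshot-1")
```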
Since I updated my production setup to Wagtail 1.11 I cannot load the admin page for images: visiting /admin/images/ results in a 502 error. In my development setup I don't have the same problem.
This is the result of the runners crashing: their memory and CPU usage gets too high for the server to handle, at which point they are killed (seen in top, and the restarts show up in the logs).
This seems to be the same as https://github.com/wagtail/wagtail/issues/3575, but Wand is not used and no GIF images are uploaded to the system, so that is not the cause. The following seemingly relevant Python packages are used:
Django==1.11.3
gunicorn==19.7.1
Pillow==4.2.1
wagtail==1.11.1
Willow==0.4
The project is running on a fully updated Ubuntu 16.04 machine.
Does anyone have a suggestion for what could fix this bug?
Try removing some of the more recent or larger images and reloading the page. The problem could be the result of a corrupt or malicious image.
The easiest way to diagnose whether this is the problem is to:
Move all images from the media/original_images folder to a backup folder.
Access the /admin/images page; if this was the problem, the page should now load again.
Note all images that now do not have a thumbnail; these are the pictures crashing the application.
Move all pictures except the ones noted back into the media/original_images folder.
Except for the pictures that were crashing your system, everything should now work much as it did before.
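Alternatively, a small script can do roughly the same check without moving files by hand: try to fully decode every original image with Pillow (already part of the stack) and report the ones that fail. The media path below is a placeholder for your own MEDIA_ROOT:

```python
# Scan original images for files Pillow cannot decode (sketch; path is a
# placeholder for your MEDIA_ROOT/original_images directory).
import os
from PIL import Image

ORIGINALS = "/srv/myproject/media/original_images"  # placeholder path

for name in sorted(os.listdir(ORIGINALS)):
    path = os.path.join(ORIGINALS, name)
    try:
        with Image.open(path) as img:
            img.load()  # force a full decode, not just a header read
    except Exception as exc:  # corrupt or unreadable file
        print(f"BAD {name}: {exc}")
```

Note that a very large (decompression-bomb style) image can still exhaust memory even though it decodes cleanly, so the manual elimination steps above remain the more thorough check.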
This is not the first time I have seen this infinite loading after copying my OpenCart shop, but if I look at the page source in the browser I can see the code, and the resources show as loaded in Chrome's Network tab. After maybe 10 minutes of loading I can sometimes see my site, but it looks as if it has no CSS.
My steps: create a new VM on Google Cloud, install MySQL (plus create a user and "grant all"), install apache2 and phpMyAdmin, copy the files from the old hosting, and change the internal paths in config.php.
There is not a single error in /var/log/apache2/error.log. Apache and MySQL are working. The VM is fast enough, and CPU and RAM are not heavily loaded. Restarting the VM does not help.
Where could the problem be?
Often this is caused by a typo in the HTTP_SERVER entry in the config file. Make sure it is formatted as http://www.domain.co.uk/ ; anything else could cause the loop.
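One quick way to see whether the page is still pulling its stylesheets and scripts from the wrong host (which is what a bad HTTP_SERVER value typically produces) is to list the hosts referenced by the generated HTML. This is only a sketch; the URL is a placeholder and the requests library is assumed:

```python
# List the hosts the shop's HTML pulls assets from (sketch; placeholder URL).
import re
import requests

url = "http://www.example.co.uk/"  # placeholder shop URL
html = requests.get(url, timeout=30).text

# Collect the scheme://host part of every absolute href/src attribute.
hosts = re.findall(r'(?:href|src)="(https?://[^/"]+)', html)
for host in sorted(set(hosts)):
    print(host)
```

If the old domain or an unreachable host shows up in that list, the browser will sit waiting on those requests, which matches the "loads for ten minutes and then appears without CSS" symptom.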