I've got two Python 3.6 pods currently running. They both used to run collectstatic on redeployment, but then one wasn't working properly, so I deleted it and made a new 3.6 pod. Everything is working perfectly with it, except it no longer runs collectstatic on redeployment (so I'm doing it manually). Any thoughts on how I can get it running again?
I checked the documentation, and the 3.11 version of OpenShift still looks like it has a variable to disable collectstatic (which I haven't set), but the 4.* versions don't seem to have it. I don't know whether that has anything to do with it.
Edit:
So it turns out that I had also updated the Django version to 2.2.7.
As it happens, the OpenShift infrastructure on OpenShift Online is happy to run collectstatic with Django 2.1.15, but not 2.2.7 (or 2.2.9). I'm not quite sure why that is yet. Still looking into it.
Currently OpenShift Online's Python 3.6 module doesn't support Django 2.2.7 or 2.2.9.
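If you run into the same thing, one workaround (a rough sketch, assuming your app installs its dependencies from a pip requirements.txt that the builder picks up) is to pin Django to the 2.1 series until the builder supports 2.2:

# requirements.txt -- pin Django below 2.2 (example pin; adjust as needed)
Django>=2.1,<2.2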
I have been running a Django server on Python 3.8 with Apache and mod_wsgi for some time now, and decided it was time to upgrade to Python 3.10. I installed Python 3.10 on my server, installed the Django and mod_wsgi packages, copied the 3.10 build of mod_wsgi.so to Apache's modules folder, and everything works great ... however, it seems with this build of mod_wsgi, changes to template files do not take effect unless I restart Apache.
As an example, I added a random HTML template file to my website, and started the server with some initial text in that template. Running on Python 3.8, I am able to change the contents of that template (e.g. echo "More Text" >> test_template.html) and upon refreshing my browser the new text will show up. However, doing the same test on 3.10, the new text will not show up. I have tried different browser sessions and hard reloading, so it is not a caching issue on the client side, and looking at the response sizes in Apache's access log confirms the data being sent to the client changes in 3.8 but not in 3.10.
I have stood up a test server to isolate the problem, and have narrowed it down specifically to changing the mod_wsgi build (which of course changes the entire Python version used by Django). Still, that confirms it should not be a caching setting of Apache, or any misconfiguration of Django templates, and I have followed the steps here to confirm I am running mod_wsgi in daemon mode (as I have been for years on this server; this is a long-standing server configured seemingly without issue for Python 3.8).
Lastly, running the Django development server (using the base manage.py runserver command) reflects template changes on the fly without issue, and without a server reboot. So as far as I can tell this seems to be a mod_wsgi quirk.
The specific Apache | mod_wsgi | Python version combinations are as follows:
Apache/2.4.41 (Ubuntu) mod_wsgi/4.6.8 Python/3.8
Apache/2.4.41 (Ubuntu) mod_wsgi/4.9.4 Python/3.10
...as reported by Apache's error.log, confirming the modules are loading as expected.
Does anyone know if this is a known issue with Python 3.10 builds of mod_wsgi? Perhaps there is a new setting I'm forgetting? My understanding of Django templates is that they should always reflect changes immediately (without a server restart), whereas code changes require a restart (or touching the wsgi.py script); I have never had to restart the server for template changes prior to this change. Any help is appreciated.
Edit: Just tried upgrading my Python 3.8 version of mod_wsgi to the same version (4.9.4), and it still works fine, so there is something about Python 3.10 vs 3.8, or another installed Python package. I will keep testing...
I ended up making a post under the mod_wsgi GitHub project, and traced the issue back to a change in Django's behavior in this commit.
Full details of that post can be found here
Tl;dr: there is a cached template loader which used to be enabled only when DEBUG = False was set, but was updated to always be in effect. I'm not sure why my Python 3.8 build did not have this change, as I had upgraded both builds to the latest Django build available (4.6.1), but my original install was years ago, so it's quite possible a fresh install would not have had this issue.
If you still want to disable cached templates as I did, you have to override the template loaders in your Django settings:
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'OPTIONS': {
            # Listing loaders explicitly bypasses the cached loader Django now enables by default
            'loaders': [
                'django.template.loaders.filesystem.Loader',
                'django.template.loaders.app_directories.Loader',
            ],
        },
    },
]
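If you would rather keep template caching in production and only disable it while developing, a sketch of one way to do that (my own variation, assuming DEBUG is defined earlier in settings.py) is to switch the loaders on DEBUG:

# Use the plain loaders in development, the cached loader otherwise
_base_loaders = [
    'django.template.loaders.filesystem.Loader',
    'django.template.loaders.app_directories.Loader',
]

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'OPTIONS': {
            'loaders': _base_loaders if DEBUG else [
                ('django.template.loaders.cached.Loader', _base_loaders),
            ],
        },
    },
]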
I'm a noob trying to learn Django for the first time. I created a project in a virtualenv on Windows 10. It worked well in the beginning, and I was able to log in to the admin section after running python manage.py runserver
But now when I run the same command I'm able to see the Django landing page, but as soon as I try to hit http://localhost:8000/admin/ or http://127.0.0.1:8000/admin the server automatically disconnects and I get the "This site can’t be reached" error in Chrome.
I tried changing the port number by running python manage.py runserver 0.0.0.0:8001 but it didn't work. I tried to check if the port (8000) is currently in use by running the cmd (as an admin) netstat -a -b but couldn't find any issues.
The server just quits without any error message
Edit: Currently using Python 3.7.0 and django-3.0.1
There's a ticket about this issue: https://code.djangoproject.com/ticket/31067.
This seems to be a bug in Python 3.7.0, and appears to be fixed in Python 3.7.1. It's still unknown what the exact trigger is for this bug.
Since Django officially only supports the latest patch release of a Python series, this won't be fixed in Django. You can either upgrade your Python version to the latest patch release of 3.7, or downgrade Django to 2.2.
From what I've seen, it's a Django 3.0 issue. There are quite a few issues on GitHub regarding this error.
You may try downgrading to a Django 2.* version for now. Versions 2.1/2.2 work fine.
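For reference, a hedged example of both options (run inside the project's virtualenv; the exact versions are illustrative):

# Option 1: downgrade Django to the 2.2 series
pip install "Django>=2.2,<3.0"

# Option 2: keep Django 3.0 but move off Python 3.7.0 -- install a newer
# 3.7.x patch release from python.org and recreate the virtualenv with it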
I'm learning to deploy Django on Openshift.
Right now I have a python-2.7 cartridge up and running with Django 1.6
The git repo cloned in the cartridge is,
git://github.com/rancavil/django-openshift-quickstart.git (Github)
How can I update the Django version of a running webapp?
I've looked at this question, but it only explains updating a cartridge, while I'm asking about updating the packages inside a cartridge while keeping the cartridge itself as python-2.7.
The easiest way to achieve this is to change the setup dependencies (the install_requires parameter for setup()) in setup.py. Instead of
install_requires = ['Django<=1.6',]
as in the cartridge default you could write
install_requires = ['Django>=1.7,<1.8',]
to get the latest version of Django 1.7. More details of how to specify values can be found in the Python Packaging User Guide.
With your next git push this file will be executed and the packages will be updated, if required.
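For context, a minimal sketch of what such a setup.py can look like (the name and metadata below are placeholders, not taken from the actual quickstart repo):

# setup.py (sketch; metadata values are placeholders)
from setuptools import setup

setup(
    name='yourappname',
    version='1.0',
    description='OpenShift App',
    # This pin controls which Django version gets installed on git push
    install_requires=['Django>=1.7,<1.8'],
)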
Warnings!
make sure the new version is OK for your app. Django 1.7 introduced the built-in DB migrations feature, which might break your compatibility. (We had some issues as we used South before that.)
before applying the upgrade, back up the app instance with a snapshot (this takes time)
Also, the git push itself takes some time, during which your application will be down.
If you want to shorten the time, you can follow this approach:
SSH into your app's OpenShift server
pip install --upgrade Django==<new version>
That will upgrade Django immediately. However, the running web process still keeps the older version, so you need to restart the Python cartridge.
From your local command line:
rhc cartridge restart -a <your app> -c python
Now it's running with the new Django and the downtime is minimal.
Make sure to also update setup.py as mentioned in the other answer so that it stays aligned with the next git push.
Running Django 1.6 and Analytical 0.16.0
I have the following in my settings.py
GOOGLE_ANALYTICS_PROPERTY_ID = env_var('GOOGLE_ANALYTICS_PROPERTY_ID')
GOOGLE_ANALYTICS_DISPLAY_ADVERTISING = True
and the Google Analytics code shows up as expected when I run the site locally and on the staging server (i.e. it loads the DoubleClick dc.js analytics script); however, when running on production it still shows the default Google Analytics ga.js script.
It isn't affected by DEBUG being on or off, and as far as I can tell the settings and environment are the same on the production and staging servers (both running on Heroku). Can anyone offer an explanation of why this might be the case?
Edit: SOLVED. It turns out I was still running Analytical 0.15.0 on the production server. I had wrongly assumed that Heroku automatically installed the latest version if the version wasn't specified in the pip requirements.
Check that Heroku is running the same version of each package:
heroku run pip freeze
It turns out it was still running an old version of django-analytical because the version number wasn't specified in the pip requirements file. Heroku won't upgrade an already-installed package unless a version is explicitly specified. Changing requirements.txt to the following solved it:
django-analytical==0.16.0
I'm hosting my website on DreamHost and I've actually managed to get Django working for the most part. The one thing I'm still struggling with is getting django-admin to behave like the version from 1.4 instead of 1.2 (which is what DreamHost runs). I set up a virtualenv and such following these instructions: http://blog.oscarcp.com/?p=167. Any help is appreciated!
It sounds like you're not activating your virtualenv before you run django-admin. What does django-admin.py --version say?
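For example (the virtualenv path below is hypothetical; substitute the one you created from those instructions):

# Activate the virtualenv that has Django 1.4 installed, then check what django-admin reports
source ~/django14-env/bin/activate
django-admin.py --version   # should now print 1.4.x instead of 1.2.x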