Hello friends. I'm working on a Django project and using Redis for its cache. I run Redis both locally and in Docker (both ways give me a working Redis server), and I added django-redis by installing it with "pip install django-redis". It works very well, but many tutorials, like the Real Python one, say we must also install Redis with "pip install redis", and I don't know why. Can anyone explain it clearly? Why must I install it with pip and probably add it to requirements? (I am sorry for my weak English.)
Actually, I'd suggest reading the package's main page. It clearly states that redis is a Python interface to the Redis server: it requires a running server and does not substitute for one. django-redis uses it to wrap calls to Redis from Python in a convenient client, instead of reinventing the wheel every time we need to access the server.
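For reference, a minimal django-redis cache configuration in settings.py looks roughly like this (a sketch assuming Redis listens on 127.0.0.1:6379; adjust LOCATION to your setup):
# settings.py -- django-redis delegates the wire protocol to the redis package
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",  # database 1 on the local server
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}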
I'm working on a project and I need to use Django Channels in it.
I followed this tutorial step by step, but it uses Redis for the channel layer, which is not supported on Windows. So is it possible to use PostgreSQL instead of Redis to back the channel layer?
Sorry for my bad grammar! English is not my native language!
Yes, you can use PostgreSQL with channels_postgres.
It's a drop-in replacement for the official Redis backend.
PS: I'm the author :-)
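For orientation, the configuration mirrors the Redis layer's, swapping in the Postgres backend. The snippet below is a sketch from memory of the project's README, so treat the backend path and the CONFIG keys as assumptions and verify them against the channels_postgres docs:
# settings.py -- hypothetical sketch; check names against the channels_postgres README
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_postgres.core.PostgresChannelLayer",  # assumed path
        "CONFIG": {
            # connection settings mirroring a Django DATABASES entry (assumed keys)
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "postgres",
            "USER": "postgres",
            "PASSWORD": "password",
            "HOST": "127.0.0.1",
            "PORT": "5432",
        },
    },
}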
I don't think you can do that with PostgreSQL.
I would rather suggest installing Docker for Windows and then running a Redis instance. After installing Docker properly, you would run a command similar to this:
docker run -p 6379:6379 redis:latest
This will run a Redis instance inside Docker, which you will be able to access through port 6379.
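With the container up, a minimal channel-layer configuration in settings.py pointing at it would look something like this (a sketch assuming the channels_redis package is installed and the container publishes port 6379 on localhost):
# settings.py -- route Django Channels through the Redis container started above
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}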
I am trying to configure a CI job on Bamboo for a Django app whose tests rely on a database (Postgres 9.5). It seems that a prudent way to go about it is to run the whole test in a Docker container, as I do not control the agent environment and so cannot install Postgres there.
Most guides I found recommend running Postgres and Django in two separate containers and using docker-compose to manage them easily. In this scenario each Docker image runs just one service, started with CMD. In Bamboo I cannot use docker-compose, however; I need to use just one image, so I am trying to get Postgres and Django to run nicely together in one container, but with little success so far.
My problem is that I see no easy way to start Postgres as a service inside Docker but NOT as the Docker CMD command; the official Postgres image uses an entrypoint.sh approach, also described in the official Docker docs.
But it is not clear to me how to implement that. I would appreciate your help!
Well, basically you would start Postgres as a background process in the docker-entrypoint shell script that otherwise starts your Django application.
The only trick is that you need to put a 'trap' command in the script so that you can send a shutdown/kill signal to the background process when your master process stops.
Although I have done that a thousand times, I know it is a frequent source of programming errors. In general I just use my docker-systemctl-replacement, which takes care of running multiple applications as services, just as if the container were a virtual machine hosting multiple applications.
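To make the trap idea concrete, here is a minimal sketch of such an entrypoint written in Python rather than shell (the Postgres binary and data-directory paths are assumptions; adjust them to whatever your image uses):
#!/usr/bin/env python3
# Sketch of an entrypoint: start Postgres in the background, run the main
# command (here the Django test suite), and forward shutdown signals.
import signal
import subprocess
import sys

# Assumed paths -- replace with the ones your image actually provides.
pg = subprocess.Popen(["/usr/pgsql-9.5/bin/postgres", "-D", "/var/lib/pgsql/9.5/data"])

def shutdown(signum, frame):
    # Python's stand-in for the shell 'trap': pass the signal on to Postgres.
    pg.terminate()
    pg.wait()
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)

# Run the main process, then stop Postgres and propagate the exit code.
exit_code = subprocess.run([sys.executable, "manage.py", "test"]).returncode
pg.terminate()
pg.wait()
sys.exit(exit_code)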
Your only other option is to add a startup script in your Dockerfile, or kick it off as part of your docker run ... commands. We don't generally use the "Docker" tasks, as I find them ... distasteful (which is also why I usually just fall back to a "Script" task and call docker run directly in that script).
Anyway, you'd have to have your Docker container execute a script that would:
Start up Postgres (something like sudo systemctl start postgresql).
Execute your tests.
Your Dockerfile will have to install PostgreSQL and do some minor setup work, I imagine (like creating the relevant users and databases with the proper owner). Since we're all good citizens, we remember never to run our containers as root, right?
Note: you can always hack around getting two containers to talk to each other without using docker-compose. It's a bit less convenient, but you could do something like:
docker run --detach --cidfile=db_cidfile --name ci_db postgresql_image
...
docker run --link ci_db testing_image
Make sure that you EXPOSE the right ports on the postgresql image to the testing_image container.
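If you go this route, the Django settings used for the test run can reach the database container by its name (a sketch; the engine matches that Django/Postgres era, and the credentials and database name are placeholders):
# settings.py (CI variant) -- 'ci_db' resolves thanks to the docker --link above
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "test_db",       # placeholder
        "USER": "postgres",      # placeholder
        "PASSWORD": "postgres",  # placeholder
        "HOST": "ci_db",
        "PORT": "5432",
    }
}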
EDIT: Looking more at my specific case: we just install PostgreSQL into a base CentOS host rather than use the default postgresql image (using yum install http://yum.postgresql.org/..../pgdg-centos...rpm and then installing the postgresql-server and postgresql-contrib packages from there). There is a CMD [ "/usr/pgsql-ver/bin/postgres", "-D", "/var/lib/pgsql/ver/data"] in our Dockerfile, too. We don't do anything fancy with the Docker container, though. NOTE: we don't use this in production at all; it is strictly for local and CI testing.
Background:
I am in the process of deploying a Django site. From my understanding and research, I need a web server, a WSGI interface to actually run said Python code and 'communicate' with it, and lastly a reverse proxy to tie the two together and pass HTTP requests through the pipeline to Django. (By virtue of my install method, mod_wsgi is not an option thanks to EasyApache4 and cPanel, so I cannot use the mod_wsgi sockets method.)
My problem:
I have set up an Apache 2 hosting server and managed to install mod_proxy and mod_proxy_uWSGI using the EasyApache4 auto installer. From what I understand, I now need to set up the proxy system to relay HTTP requests through mod_proxy_uWSGI, which doubles up and also runs my Django site; however, I cannot access or configure mod_proxy_uWSGI. When I try a command in the following style (sorry, I don't want my server URLs floating around the internet):
uwsgi --http :8000 --wsgi-file test.py
I get an error message:
bash: uwsgi: command not found
Am I missing something?
Thanks to a comment by @dirkgroten. To install uWSGI:
pip install uwsgi
After running pip install uwsgi, it's possible that uwsgi was installed somewhere that is not on your PATH. For example, in my case it got installed to:
/usr/local/opt/python-3.8.6/bin/uwsgi
I was able to fix this by adding a symlink:
sudo ln -s /usr/local/opt/python-3.8.6/bin/uwsgi /usr/bin/uwsgi
(This may be a terrible idea. It may be a much better idea to use a venv, but I'm following a tutorial that specifically told me to avoid using a venv.)
In my case, using Docker, I found the binary to be located in:
/home/webappuser/.local/bin
Adding to @ArtOfWarfare's answer, you can check where pip installed the package using this command:
▶ pip show uwsgi
Name: uWSGI
Version: 2.0.21
Summary: The uWSGI server
Home-page: https://uwsgi-docs.readthedocs.io/en/latest/
Author: Unbit
Author-email: info@unbit.it
License: GPLv2+
Location: /Users/username/Library/Python/3.9/lib/python/site-packages
Requires:
Required-by:
In my case, it is /Users/username/Library/Python/3.9/lib/python/site-packages, so the uwsgi binary will be present in /Users/username/Library/Python/3.9/bin/. Add this location to your PATH and you should be good.
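If you'd rather ask the interpreter directly where console scripts such as uwsgi end up, a one-liner like this works (a sketch; the reported directory varies by platform, Python version, and install scheme, e.g. --user installs land elsewhere):
# Print the directory used for console scripts under the current install scheme
import sysconfig
print(sysconfig.get_path("scripts"))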
I'm learning to deploy Django on Openshift.
Right now I have a python-2.7 cartridge up and running with Django 1.6
The git repo cloned in the cartridge is:
git://github.com/rancavil/django-openshift-quickstart.git (Github)
How can I update the Django version of a running web app?
I've looked at this question, which just explains updating a cartridge, whereas I'm asking about updating the packages inside a cartridge while keeping the cartridge itself at python-2.7.
The easiest way to achieve this is to change the setup dependencies (the install_requires parameter of setup()) in setup.py. Instead of
packages = ['Django<=1.6',]
as in the cartridge default, you could write
packages = ['Django>=1.7,<1.8',]
to get the latest version of Django 1.7. More details on how to specify version values can be found in the Python Packaging User Guide.
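In context, the relevant part of setup.py would then look something like this (a sketch; the name and version are placeholders rather than the quickstart's actual values):
# setup.py -- only install_requires matters for the dependency upgrade
from setuptools import setup

setup(
    name="YourAppName",  # placeholder
    version="1.0",       # placeholder
    install_requires=["Django>=1.7,<1.8"],
)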
With your next git push this file will be executed and the packages will be updated, if required.
Warnings!
Make sure the new version is OK for your app. Django 1.7 introduced the DB migrations feature, which might break your compatibility. (We had some issues because we had used South before that.)
Before applying the upgrade, back up the app instance with a snapshot (this takes time).
Also, git push itself takes some time, during which your application will be down.
If you want to shorten the time, you can follow this approach:
SSH into your app's OpenShift server
pip install --upgrade Django==<new version>
That will upgrade Django immediately. However, the running web process still keeps the older version, so you need to restart the Python cartridge.
From your local command line:
rhc cartridge restart -a <your app> -c python
Now it's running with the new Django and the downtime is minimal.
Make sure to update setup.py as mentioned in the other answer, in order to stay aligned with the next git push.
I've configured my local machine's HOSTS file so that the browser hits the local server (127.0.0.1) whenever I visit http://www.mydomain.com.
I was using this to interact with Facebook's Graph API to build my app. But now Facebook requires an HTTPS, or rather SSL-secured, URL to interact with its API.
So the question is: how do I set up SSL on a local Django server?
Not to necro a thread, but I found this tool to be extremely easy to use.
It's a premade django application with very simple install instructions.
You can supply a certificate and key once it is installed, simply by running:
python manage.py runsslserver --certificate /path/to/certificate.crt --key /path/to/key.key
I hope this helps any passer-by who might see this.
With django-extensions you can run the following command:
python manage.py runserver_plus --cert certname
It will generate a (self-signed) certificate automatically if it doesn't exist. Almost too simple.
You just need to install the following dependencies:
pip install django-extensions
pip install Werkzeug
pip install pyOpenSSL
Now, as Ryan Pergent pointed out in the comments, you only need to add 'django_extensions' to your INSTALLED_APPS and you should be good to go.
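That is, something like this in settings.py (a sketch; your existing apps stay as they are):
# settings.py
INSTALLED_APPS = [
    # ... your existing apps ...
    "django_extensions",
]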
I used a tunnel before, which worked, but this is much easier and comes with many other commands.
The short answer is that you'll need to set up a proper web server on your development machine. Use whichever one (Apache, nginx, Cherokee, etc.) you're most familiar with.
The longer answer is that the Django development server (manage.py runserver) isn't designed to do SSL and the like, and the effort to make it do so is likely greater than you'd want to spend.
See discussions of this passim on the django-users list: http://groups.google.com/group/django-users/browse_thread/thread/9164126f70cebcbc/f4050f6c82fe1423?lnk=gst&q=ssl+development+server#f4050f6c82fe1423
A workaround to run HTTPS on Django:
This can be done with stunnel, which lets the Facebook server and stunnel on your machine communicate over SSL, while stunnel turns around and communicates with Python over HTTP. First install stunnel. For instance, on Mac OS X:
brew install stunnel
Then you need to create a settings file for stunnel to execute. You can create a text file anywhere. For instance, you can create dev_https and input:
pid=
cert=/usr/local/etc/stunnel/stunnel.pem
foreground=yes
debug=7
[https]
accept=8001
connect=8002
TIMEOUTclose=1
stunnel creates a fake certificate. By default on Mac, it's at /usr/local/etc/stunnel/stunnel.pem. It will bring up a warning in your browser saying that your webpage may be fake, but Facebook operations still work correctly. Since stunnel has to listen on one port and Python's development server cannot run on the same port, you must use different ports for accept (incoming) and connect (internal). Once you have your dev_https file, or whatever you called it, run
sudo stunnel dev_https
to start the tunnelling. Then start your Python server.
HTTPS=1 python manage.py runserver 0.0.0.0:8002
The HTTPS environment variable must be set to 1 for Django to return secure responses, and since we previously set the internal port to 8002, we listen on 8002 for all incoming IPs. Then your IP:8001 can accept HTTPS connections without changing your web server, and you can continue running another instance of the HTTP Python server on a different port.
ref:
https://medium.com/xster-tech/django-development-server-with-https-103b2ceef893
I understand this has already been answered, but for a clearer solution:
Step 1: Install library
pip install django-sslserver
Step 2: Add to installed apps in settings.py
INSTALLED_APPS = [
'sslserver',
'...'
]
Step 3: Run the code using runsslserver instead of runserver. Certificate & key are optional.
python manage.py runsslserver --certificate /path/to/certificate.crt --key /path/to/key.key
This doesn't solve the automatic testing issue via
./manage.py test
but to run a server with HTTPS you can use RunServerPlus: http://pythonhosted.org/django-extensions/runserver_plus.html
Just install django-extensions and pyOpenSSL:
pip install django-extensions pyOpenSSL
and then run:
python manage.py runserver_plus --cert cert
I've been able to set up SSL on Django's test server by using stunnel. Here is some info on how to set it up.
Just a note: I wasn't able to get it working with the package Debian provides via apt-get, and I had to install from source. In case you have to do the same, please check out the excellent instructions on the Debian forums on how to build Debian packages.
There are plenty of instructions online, and also in the stunnel FAQ, on how to create your PEM certificate, but ultimately dpkg-buildpackage on Debian built it for me.
I would imagine that things could actually be more straightforward on Windows.
I then was able to make PyDev in Eclipse start the test server (and also attach to it) by adding an HTTPS=1 environment variable under "Debug Configurations" -> "Environment" -> Variables.
I had the same problem when I wanted to test sign-up using Facebook. After using the Django SSL Server from https://github.com/teddziuba/django-sslserver, the problem was solved. You may need it too.
This discussion page is really old; earlier, Django did not support SSL and it needed to be done through stunnel or Werkzeug.
Django now supports SSL in development with django-sslserver:
https://djangopackages.org/packages/p/django-sslserver/
Add it to your installed apps and pass the certs on the command line.