Django Channels with PostgreSQL - django

I'm working on a project and I need to use Django Channels in it.
I followed this tutorial step by step, but it uses Redis for the channel layer (which is not supported on Windows), so is it possible to use PostgreSQL instead of Redis as the channel layer backend?
Sorry for my bad grammar! English is not my native language!

Yes, you can use PostgreSQL with channels_postgres.
It's a drop-in replacement for the official Redis backend.
PS: I'm the author :-)
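For context, the configuration looks roughly like the sketch below. The backend path, option names, and connection values here are assumptions for illustration, so check the channels_postgres README for the exact settings (it may also ask for a dedicated entry in DATABASES).

# settings.py -- rough sketch only; backend path and options are assumptions,
# see the channels_postgres README for the authoritative configuration
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_postgres.core.PostgresChannelLayer",
        "CONFIG": {
            # channel messages are stored in a Postgres database,
            # so the layer needs ordinary connection parameters
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "postgres",
            "PASSWORD": "password",
            "HOST": "127.0.0.1",
            "PORT": "5432",
        },
    },
}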

I don't think you can do that with PostgreSQL.
I would rather suggest installing Docker for Windows and then running a Redis instance. After installing Docker properly, you would run a command similar to this:
docker run -p 6379:6379 redis:latest
This will run a Redis instance inside Docker, which you will be able to access through port 6379.
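From there, pointing Channels at that container is just a settings change. A minimal sketch, assuming channels_redis is installed (pip install channels_redis):

# settings.py -- minimal sketch; assumes `pip install channels_redis`
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            # the host/port published by `docker run -p 6379:6379 redis:latest`
            "hosts": [("127.0.0.1", 6379)],
        },
    },
}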


How to change the settings of Apache Superset from SQLite to MySQL

Can anyone provide the steps to change the settings of Apache Superset from SQLite to MySQL?
I have created superset_config.py to override the configuration.
After adding the property I am able to enable the Swagger URL.
I have added the SQLALCHEMY_DATABASE_URI = 'mysql://root:xxxxxx#127.0.0.1:3306/superset' property in the superset_config.py file,
but it is still connecting to SQLite.
I wrote an article that may help you.
I had problems with both docker compose and the native installation. A closed port can be caused by a networking problem or a problem with packages. host.docker.internal didn't work for me on Ubuntu 22. I would recommend not following the official docs and instead starting with a simpler approach using a single Docker image: instead of running 5 containers with compose, run everything in one. Use the official Docker image (apache/superset, as in the Dockerfile below), then modify the Dockerfile as follows to install a custom DB driver:
FROM apache/superset
USER root
RUN pip install mysqlclient
RUN pip install sqlalchemy-redshift
USER superset
The second step is to build a new image from that Dockerfile. To avoid networking problems, start both containers (Superset and your DB) on the same network; the easiest option is to use the host network. I used this on Google Cloud, for example as follows:
docker run -d --network host --name superset supers
Use the same command, with --network host, to start the container with your database. This solved my problems. More details in the full step-by-step tutorial on Medium or on the blog.
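Separately from the Docker setup, it's worth double-checking the connection URI itself and that Superset actually loads your superset_config.py (for example via the SUPERSET_CONFIG_PATH environment variable, or by having the file on PYTHONPATH). A minimal sketch with placeholder credentials:

# superset_config.py -- placeholder credentials; adjust user/password/host for your setup
# note the '@' between the password and the host: a '#' there will not parse as a host
SQLALCHEMY_DATABASE_URI = "mysql://superset:superset_pass@127.0.0.1:3306/superset"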

Redis in Django requirements

Hello friends, I'm working on a Django project and use Redis for its cache. I run Redis locally and I also use Docker to run Redis (both the local Redis and the Docker Redis work fine for me, so I have a Redis server up), and I added django-redis by installing it with "pip install django-redis". It works very well, but many tutorials, like the Real Python tutorial, say we must also install Redis with "pip install redis" and I don't know why. Can anyone explain this clearly? Why must I install it with pip and probably add it to requirements? (I am sorry for my weak English.)
Actually, I'd suggest reading the package's main page. It clearly states that redis is a Python interface to the Redis server. It requires a running server and does not substitute for it. It's used by django-redis to wrap calls to Redis from Python in a convenient client, instead of reinventing the wheel every time we need to access the server.
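In other words: the Redis server (local or in Docker) does the storage, the redis package is the Python client that talks to it, and django-redis builds the Django cache backend on top of that client. A minimal sketch of the cache settings, assuming a Redis server on the default local port:

# settings.py -- minimal sketch; the URL assumes a local Redis on the default port 6379
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            # django-redis talks to the server through the `redis` package via this client class
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}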

Running postgres 9.5 and django in one container for CI (Bamboo)

I am trying to configure a CI job on Bamboo for a Django app; the tests to be run rely on a database (Postgres 9.5). It seems that a prudent way to go about it is to run the whole test in a Docker container, as I do not control the agent environment, so I cannot install Postgres there.
Most guides I found recommend running Postgres and Django in two separate containers and using docker-compose to easily manage them. In this scenario each Docker image runs just one service, started with CMD. In Bamboo I cannot use docker-compose, however; I need to use just one image, so I am trying to get Postgres and Django to run nicely together in one container, but with little success so far.
My problem is that I see no easy way to start Postgres as a service inside Docker but NOT as the Docker CMD command; the official Postgres image uses an entrypoint.sh approach, also described in the official Docker docs.
But it is not clear to me how to implement that. I would appreciate your help!
Well, basically you would start Postgres as a background process in the docker-entrypoint shell script that otherwise starts your Django application.
The only trick here is that you need to put a 'trap' command in it so that you can send a shutdown/kill to the background process when your master process stops.
Although I have done that a thousand times, I know that it is a common source of programming errors. In general I just use my docker-systemctl-replacement, which takes care of running multiple applications as services, just as if the container were a virtual machine hosting multiple applications.
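For illustration only, here is roughly what such an entrypoint could look like if written in Python instead of a shell script; the gosu call and the data directory are assumptions based on the official Postgres image, so adapt them to your own Dockerfile.

#!/usr/bin/env python3
# docker-entrypoint.py -- sketch of the "background Postgres + trap" idea in Python
import signal
import subprocess
import sys
import time

# Start Postgres in the background (a shell script would use `&` here).
postgres = subprocess.Popen(
    ["gosu", "postgres", "postgres", "-D", "/var/lib/postgresql/data"]
)

def shutdown(signum, frame):
    # Equivalent of the shell 'trap': forward the stop signal to Postgres.
    postgres.terminate()
    postgres.wait()
    sys.exit(0)

signal.signal(signal.SIGTERM, shutdown)
signal.signal(signal.SIGINT, shutdown)

# Wait until Postgres accepts connections (pg_isready ships with Postgres).
while subprocess.run(["gosu", "postgres", "pg_isready"]).returncode != 0:
    time.sleep(1)

# Run the Django test suite as the foreground ("master") process.
result = subprocess.run([sys.executable, "manage.py", "test"])

# Stop Postgres and propagate the test result to the CI job.
postgres.terminate()
postgres.wait()
sys.exit(result.returncode)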
Your only other option is to add a startup script in your Dockerfile, or kick it off as part of your docker run ... commands. We don't generally use the "Docker" tasks, as I find them ... distasteful (which is also why I usually just fall back to running a "Script" task and directly calling docker run in that script task).
Anyway, you'd have to have your Docker container execute a script that would:
Start up Postgres (like a sudo systemctl start postgresql)
Execute your tests.
Your Dockerfile will have to install PostgreSQL and do some minor setup work, I imagine (like creating the relevant users and databases with the proper owner). Since we're all good citizens, we remember to never run our containers as root, right?
Note - you can always hack around getting two containers to talk to each other without using docker-compose. It's a bit less convenient, but you could do something like:
docker run --detach --cidfile=db_cidfile --name ci_db postgresql_image
...
docker run --link ci_db testing_image
Make sure that you EXPOSE the right ports on the postgresql image to the testing_image container.
EDIT: I'm looking more at my specific case - we just install Postgresql into a base CentOS host rather than use the postgresql default image (using yum install http://yum.postgresql.org/..../pgdg-centos...rpm and then just install postgresql-server and postgresql-contrib packages from there). There is a CMD [ "/usr/pgsql-ver/bin/postgres", "-D", "/var/lib/pgsql/ver/data"] in our Dockerfile, too. We don't do anything fancy with the docker container, though. NOTE: we don't use this in production at all, this is strictly for local and CI testing.

How to configure an Elastic IP with a Django app in AWS?

I am building an app using Django on an EC2 Ubuntu instance and I have associated an Elastic IP with my instance.
I have done the following steps:
1. First created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using this command: python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security inbound settings with HTTP on 0.0.0.0:80.
9. Able to reach my project from any browser.
But the problem is that when I close the PuTTY session where I ran the runserver command, the Django project also stops; I did not stop it manually.
Please help me keep it running after closing the PuTTY session as well.
Thanks,
Kripa Sharma
Take a look at this Answer
I highly recommend that you start using Elastic Beanstalk (Python instance) to take care of all these steps for you. It is very simple to set up, and there is no need to worry about any of the steps you listed.
You can use this instruction to see how you can deploy a Django app in less than 5 minutes.
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at "How to deploy with WSGI" section, which gives a few recommendations for commonly used Web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on, and installed, a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either upstart or systemd, depending on the version. Gunicorn docs have examples for both.
The latter is usually achieved with an http-server/proxy such as nginx or apache httpd. And again, Gunicorn has an example for us.
You can see why I like it so much ☺️
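For reference, the daemonized service and the reverse proxy usually both point at a small Gunicorn configuration file. A minimal sketch (the module name comes from the example above; the bind address and worker count are assumptions to adjust for your setup):

# gunicorn.conf.py -- minimal sketch; nginx/apache proxies public traffic to the bind address
bind = "127.0.0.1:8000"
workers = 3
wsgi_app = "myproject.wsgi"  # supported in recent Gunicorn versions; older ones take it on the command line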
Epilogue
While technically possible to run the debug server as a service or even in a terminal multiplexer such as GNU screen or tmux, it's not a recommended or stable long term solution.
That said, these are very useful to know about, so read on the above tools and learn to use them, since they would be invaluable to have in your toolset in the future, for example to avoid accidentally terminating a long running command (such as migration), etc.

Django deployment on a CentOS server

I generally deploy Django web applications on Ubuntu.
But currently we are using cPanel (not looking for alternatives), which doesn't work on Ubuntu, so I want to know if moving to CentOS for cPanel is worth it.
My fear is that if we move to a CentOS server, we will face complex issues (with the Django deployment) that might take a lot of time (we are good at Ubuntu).
Deploying on CentOS is essentially the same. You will probably want to run Django under Gunicorn with Supervisor watching over the process, and Nginx for serving static files.
I have done deployments on CentOS as well as on Ubuntu and I didn't encounter any problems.
You can use the link below to help you with the deployment:
https://www.digitalocean.com/community/tutorials/how-to-redirect-www-to-non-www-with-nginx-on-centos-7
and if you have any permission issues you can disable SELinux.