django project with twisted and run as "Daemon" - django

For the last two days I've been trying to find a way to run a working Django project under Twisted. After detailed searching I found several methods to configure it, but most of them deal with how to run the app from the command line, not as a daemon. I want to run the Django project as a daemon.
I tried the following links to implement this:
Twisted: Creating a ThreadPool and then daemonizing leads to uninformative hangs
http://www.robgolding.com/blog/2011/02/05/django-on-twistd-web-wsgi-issue-workaround/
But this is also not working for me. With this method the TCP server does not even listen on the given port.
Please help me to figure it out.
UPDATE
I'm sorry for the missing information. Here are my objectives.
I'm a beginner in the Twisted world, so first I'm trying to get my working Django project configured under Twisted; currently it works well on the Django development server and on Apache via mod_wsgi.
To configure it with Twisted I used the binding code given below; that code is a combination of two samples found in the links given in the first post.
So, in order to integrate the Django app with Twisted, I used the following code, file name: "server.py".
import sys
import os

from twisted.application import internet, service
from twisted.web import server, resource, wsgi, static
from twisted.python import threadpool
from twisted.internet import reactor

from django.conf import settings

import twresource  # This file holds the implementation of class Root.


class ThreadPoolService(service.Service):
    def __init__(self, pool):
        self.pool = pool

    def startService(self):
        service.Service.startService(self)
        self.pool.start()

    def stopService(self):
        service.Service.stopService(self)
        self.pool.stop()


class Root(resource.Resource):
    def __init__(self, wsgi_resource):
        resource.Resource.__init__(self)
        self.wsgi_resource = wsgi_resource

    def getChild(self, path, request):
        path0 = request.prepath.pop(0)
        request.postpath.insert(0, path0)
        return self.wsgi_resource


PORT = 8080

# Environment setup for your Django project files:
# Insert it first so our project gets first priority.
sys.path.insert(0, "django_project")
sys.path.insert(0, ".")
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

from django.core.handlers.wsgi import WSGIHandler


def wsgi_resource():
    pool = threadpool.ThreadPool()
    pool.start()
    # Allow Ctrl-C to get you out cleanly:
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    wsgi_resource = wsgi.WSGIResource(reactor, pool, WSGIHandler())
    return wsgi_resource
# Twisted Application Framework setup:
application = service.Application('twisted-django')
# WSGI container for Django, combine it with twisted.web.Resource:
# XXX this is the only 'ugly' part: see the 'getChild' method in twresource.Root
wsgi_root = wsgi_resource()
root = Root(wsgi_root)
#multi = service.MultiService()
#pool = threadpool.ThreadPool()
#tps = ThreadPoolService(pool)
#tps.setServiceParent(multi)
#resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
#root = twresource.Root(resource)
#Admin Site media files
#staticrsrc = static.File(os.path.join(os.path.abspath("."), "/usr/haridas/eclipse_workplace/skgargpms/django/contrib/admin/media/"))
#root.putChild("admin/media", staticrsrc)
# Serve it up:
main_site = server.Site(root)
#internet.TCPServer(PORT, main_site).setServiceParent(multi)
internet.TCPServer(PORT, main_site).setServiceParent(application)
#EOF.
Using the above code it worked well from the command line with "twistd -ny server.py", but when we run it as a daemon with "twistd -y server.py" it hangs, although the app is listening on port 8080. I can access it using telnet.
I found some fixes for this hanging issue on Stack Overflow itself. They led me to the code sections given below, which are commented out in the above server.py file.
multi = service.MultiService()
pool = threadpool.ThreadPool()
tps = ThreadPoolService(pool)
tps.setServiceParent(multi)
resource = wsgi.WSGIResource(reactor, tps.pool, WSGIHandler())
root = twresource.Root(resource)
and :-
internet.TCPServer(PORT, main_site).setServiceParent(multi)
instead of using the:-
wsgi_root = wsgi_resource()
root = Root(wsgi_root)
and :-
internet.TCPServer(PORT, main_site).setServiceParent(application)
The modified method also didn't help me avoid the hanging issue. Is there anybody out there who has successfully run Django apps under Twisted in daemon mode?
Did I make any mistakes while combining these pieces of code? Currently I have only started to learn the Twisted architecture in detail. Please help me to solve this problem.
Thanks and Regards,
Haridas N.
Note: I'm looking for a Twisted Application Configuration (.tac) file which integrates the Django app with Twisted and also runs without any problem in daemon mode.
Thank you,
Haridas N.

twistd is the Twisted Daemonizer. Anything you run with twistd will be easy to daemonize. All you have to do is not pass the --nodaemon option.
As far as why your code is "not working", you need to provide more details about what you did, what you expected to happen, and how what actually happened differed from your expectations. Otherwise, only a magician can answer your question.
Since you said the TCP port doesn't even get set up, the only guess I can think of is that you're trying to listen on a privileged port (such as 80) without having permissions to do so (ie, you're not root and you're not using authbind or something similar).
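For what it's worth, below is a minimal sketch of the .tac approach the question is working toward: the commented-out ThreadPoolService sections combined into one file, so the thread pool is started by the service machinery (i.e. after twistd daemonizes) rather than at import time. The project path, settings module, and port are placeholders taken from the question; treat this as illustrative, not a verified fix.
# server.tac -- sketch; run with "twistd -y server.tac" (add -n to stay in the foreground)
import os
import sys

from twisted.application import internet, service
from twisted.internet import reactor
from twisted.python import threadpool
from twisted.web import resource, server, wsgi

sys.path.insert(0, "django_project")
sys.path.insert(0, ".")
os.environ['DJANGO_SETTINGS_MODULE'] = 'django_project.settings'

from django.core.handlers.wsgi import WSGIHandler


class ThreadPoolService(service.Service):
    """Start/stop the WSGI thread pool with the application, not at import time."""
    def __init__(self, pool):
        self.pool = pool

    def startService(self):
        service.Service.startService(self)
        self.pool.start()

    def stopService(self):
        service.Service.stopService(self)
        self.pool.stop()


class Root(resource.Resource):
    """Delegate every request path to the Django WSGI resource."""
    def __init__(self, wsgi_resource):
        resource.Resource.__init__(self)
        self.wsgi_resource = wsgi_resource

    def getChild(self, path, request):
        request.postpath.insert(0, request.prepath.pop(0))
        return self.wsgi_resource


PORT = 8080

application = service.Application('twisted-django')
multi = service.MultiService()
multi.setServiceParent(application)

tps = ThreadPoolService(threadpool.ThreadPool())
tps.setServiceParent(multi)

root = Root(wsgi.WSGIResource(reactor, tps.pool, WSGIHandler()))
internet.TCPServer(PORT, server.Site(root)).setServiceParent(multi)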


Django with Celery on Digital Ocean

The Objective
I am trying to use Celery in combination with Django; the objective is to set up Celery on a Django web application (deployed test environment) to send scheduled emails. The web application already sends emails. The ultimate objective is to add functionality to send out emails at a user-selected date-time. However, before we get there, the first step is to invoke the delay() function to prove that Celery is working.
Tutorials and Documentation Used
I am new to Celery and have been learning through the following resources:
First Steps With Celery-Django documentation: https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html#using-celery-with-django
A YouTube video on sending email from Django through Celery via a Redis broker: https://www.youtube.com/watch?v=b-6mEAr1m-A
The Redis/Celery droplet was configured per the following tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-redis-on-ubuntu-20-04
I have spent several days reviewing existing Stack Overflow questions on Django/Celery, and tried a number of suggestions. However, I have not found a question specifically describing this effect in the Django/Celery/Redis/Digital Ocean context. Below is described the current situation.
What Is Currently Happening?
The current outcome, as of this post, is that the web application times out, suggesting that the Django app is not successfully connecting with Celery to send the email. Please note that towards the bottom of the post is the output of the Celery worker being successfully started manually from within the Django app's console, including a listing of the expected tasks.
The Stack In Use
Python 3.11 and Django 4.1.6: Running on the Digital Ocean App platform
Celery 5.2.7 and Redis 4.4.2 on Ubuntu 20.04: Running on a separate Digital Ocean Droplet
The Django project name is, "Whurthy".
Celery Setup Code Snippets
The following snippets are primarily from the Celery-Django documentation: https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html#using-celery-with-django
Whurthy/celery.py
import os

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'Whurthy.settings')

app = Celery('Whurthy')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
Whurthy/__init__.py
from .celery import app as celery_app
__all__ = ('celery_app',)
Application Specific Code Snippets
Whurthy/settings.py
CELERY_BROKER_URL = 'redis://SNIP_FOR_PRIVACY:6379'
CELERY_RESULT_BACKEND = 'redis://SNIP_FOR_PRIVACY:6379'
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_TIME_LIMIT = 30 * 60
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
I have replaced the actual IP with the string SNIP_FOR_PRIVACY for obvious reasons. However, if this were incorrect I would not get the output below.
I have also commented out the bind and requirepass redis configuration settings to support troubleshooting during development. This makes the URL as simple as possible and rules out either the incoming IP or password as being the cause of this problem.
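For reference, that corresponds to commenting out roughly these lines in /etc/redis/redis.conf (the values shown are illustrative placeholders, not the real ones):
# /etc/redis/redis.conf -- illustrative excerpt
# bind 127.0.0.1 ::1
# requirepass SNIP_FOR_PRIVACY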
events/tasks.py
from celery import shared_task
from django.core.mail import send_mail


@shared_task
def send_email_task():
    send_mail(
        'Celery Task Worked!',
        'This is proof the task worked!',
        'notifications@domain.com',
        ['my_email@domain.com'],
    )
    return
For privacy reasons I have changed the to and from email addresses. However, please note that this function works before adding .delay() to the following snippet. In other words, the Django app sends an email up until I add .delay() to invoke Celery.
events/views.py (extract)
from .tasks import send_email_task
from django.shortcuts import render


def home(request):
    send_email_task.delay()
    return render(request, 'home.html', context)
The above is just the relevant extract of a larger file to show the specific line of code calling the function. The Django web application is working until delay() is appended to the function call, and so I have not included other Django project file snippets.
Output from Running celery -A Whurthy worker -l info in the Digital Ocean Django App Console
Ultimately, I want to Dockerize this command, but for now I am running the above command manually. Below is the output within the Django App console, and it appears consistent with the tutorial and other examples of what a successfully configured Celery instance would look like.
<SNIP>
-------------- celery#whurthy-staging-b8bb94b5-xp62x v5.2.7 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-4.4.0-x86_64-with-glibc2.31 2023-02-05 11:51:24
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: Whurthy:0x7f92e54191b0
- ** ---------- .> transport: redis://SNIP_FOR_PRIVACY:6379//
- ** ---------- .> results: redis://SNIP_FOR_PRIVACY:6379/
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. Whurthy.celery.debug_task
. events.tasks.send_email_task
This appears to confirm that the Digital Ocean droplet is starting up a Celery worker successfully (suggesting that the code snippets above are correct) and that the Redis configuration is correct. The two tasks listed when starting Celery are consistent with expectations. However, I am clearly missing something, and I cannot rule out that the way Digital Ocean runs droplets is getting in the way.
The baseline test is that the web application sends out an email through the function call. However, as soon as I add .delay() the web page request times out.
I have endeavoured to replicate all that is relevant. I welcome any suggestions to resolve this issue or constructive criticism to improve this question.
Troubleshooting Attempts
Attempt 1
Through the D.O. app console I ran python manage.py shell
I then entered the following into the shell:
>>> from events.tasks import send_email_task
>>> send_email_task
<@task: events.tasks.send_email_task of Whurthy at 0x7fb2f2348dc0>
>>> send_email_task.delay()
At this point the shell hangs/does not respond until I keyboard interrupt.
I then tried the following:
>>> send_email_task.apply()
<EagerResult: 90b7d92c-4f01-423b-a16f-f7a7c75a545c>
AND, the task sends an email!
So, the connection between Django-Redis-Celery appears to work. However, invoking delay() causes the web app to time out and the email to NOT be sent.
So either delay() isn't putting the task on the queue, or it is getting stuck. In either case, this does not appear to be a connection issue. However, because apply() runs the code in the caller's thread, it doesn't resolve the issue.
That does suggest this may be an issue with the broker, which in turn may be an issue with the settings...
Made minor changes to broker settings in settings.py
CELERY_BROKER_URL = 'redis://SNIP_FOR_PRIVACY:6379/0'
CELERY_RESULT_BACKEND = 'redis://SNIP_FOR_PRIVACY:6379/1'
delay() still hangs in the shell.
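A quick way to check broker connectivity from that same shell (a sketch; it assumes the project-level celery_app exported by Whurthy/__init__.py and standard Celery/kombu calls):
>>> from Whurthy import celery_app
>>> celery_app.connection().ensure_connection(max_retries=1)  # raises if the broker is unreachable
>>> celery_app.control.ping(timeout=2)  # e.g. [{'celery@hostname': {'ok': 'pong'}}] if a worker replies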
Attempt 2
I discovered that in Digital Ocean the public IPv4 address does not work when used for the broker URL. By replacing it with the private IP in the CELERY_BROKER_URL setting I was able to get delay() working within the Django app's shell.
However, while I can now get delay() working in the shell, returning to the original objective still fails. In other words, when loading the respective view, the web application hangs.
I am currently researching other approaches. Any suggestions are welcome. Given that I can now get Celery to work through the broker in the shell but not in the web application I feel like I have made some progress but am still out of a solution.
As a side note, I am also trying to make this connection through a Digital Ocean Managed Redis DB, although that is presenting a completely different issue.
Ultimately, the answer I uncovered is a compromise: a workaround using a different Digital Ocean (D.O.) product. The workaround was to use a Managed Database (which simplifies things but gives you much less control) rather than a Droplet (which involves manual Linux/Redis installation and configuration, but gives you greater control). This isn't ideal for two reasons. First, it costs more ($15 base cost for the managed database vs $6 for a droplet). Second, I would have preferred to work out how to manually set up Redis (and thus maintain greater control). However, I'll take a working solution over no solution for a very niche issue.
The steps to use a D.O. Managed Redis DB are:
Provision the managed Redis DB
Use the Public Network Connection String (as the connection string includes the password I store this in an environment variable)
Ensure that you have the appropriate ssl setting in the 'celery.py' file (snippet below)
celery.py
import os
import ssl

from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj_name.settings')

app = Celery(
    'proj_name',
    broker_use_ssl={'ssl_cert_reqs': ssl.CERT_NONE},
    redis_backend_use_ssl={'ssl_cert_reqs': ssl.CERT_NONE},
)
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.task(bind=True)
def debug_task(self):
    print(f'Request: {self.request!r}')
settings.py
REDIS_URI = os.environ.get('REDIS_URI')
CELERY_BROKER_URL = f'{REDIS_URI}/0'
CELERY_RESULT_BACKEND = f'{REDIS_URI}/1'
CELERY_TASK_TRACK_STARTED = True
CELERY_TASK_TIME_LIMIT = 30 * 60
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = TIME_ZONE
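For context, the public connection string for the managed database is TLS-based, which is why the ssl settings above are needed; the REDIS_URI environment variable ends up looking roughly like this (all values are placeholders):
REDIS_URI='rediss://default:<password>@<host>:<port>'  # copy the real string from the provider's control panel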

unittest on flask_restful app does not work - stuck on flask server running

My tools: Python 3.5.0, flask 1.0.2, mac osx
My problem:
I have a very simple RESTful app with two endpoints that are working. I wrote two very simple unit tests, via unittest, and they are not proceeding for a reason that I'm not sure of right now. The tests succeed if I do the following:
If I run the server separately, say on http://127.0.0.1:8015/, (and not setUp() anything)
And run the tests such that they call requests.get(http://127.0.0.1:8015/employee/3)
the tests run just fine and they pass
The tests just hang if I run the tests with the setUp(self) definition below:
Serving Flask app "testing" (lazy loading)
Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
Debug mode: off
Running on http://127.0.0.1:8015/ (Press CTRL+C to quit)
And here is the pertinent code
def setUp(self):
    self.app = Flask("testing")
    self.app.testing = True
    self.client = self.app.test_client()
    self.EmployeeId = 4
    with self.app.app_context():
        db_connect = create_engine('sqlite:///some.db')
        self.api = Api(self.app)
        self.api.add_resource(server.Car, '/car/<employee_id>')  # Route_4
        app.run(port=8015, debug=False)
def test_api_can_get_employee_by_id(self):
    res = requests.get(url='http://127.0.0.1:8015/car/{}'.format(self.EmployeeId))
    data = res.json()
    self.assertEqual(res.status_code, 200)
    self.assertIn('mazda', data["data"][0]['make_model'])
I've looked online and found no resource that really covers my question. The setup of the server works during testing, but the unit tests are not executed. How would you recommend troubleshooting this? I'm open to all suggestions, including changing the approach. Thank you!
For those of you stumbling here after Googling "Unit tests stuck on Running on with Flask":
It's possible that your code always starts the Flask server, regardless of how your module was loaded. If that's the case, even the unit tests will start the server, which will then listen indefinitely for connections, as expected.
E.g., this will get stuck:
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return "Hello World!"


app.run()
Instead, you'll need to start the Flask server only if your module is the main program:
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return "Hello World!"


if __name__ == '__main__':
    app.run()
More info on the "__name__ == '__main__'" block here.
Regarding the original question, this answer does not solve it. To do something like that you would need to start the Flask server in a non-blocking way, such as in another thread or process. There are many ways to do this, and without any additional info the question is just too vague to answer as is.
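For illustration, here is one possible sketch of that approach, assuming the port from the question: start the development server in a daemon thread so the tests can make real HTTP requests against it. The helper names below are mine, not from the original code.
import threading
import time

import requests
from flask import Flask


def make_app():
    app = Flask("testing")

    @app.route('/')
    def hello():
        return "Hello World!"

    return app


def start_in_background(app, port=8015):
    # use_reloader must stay off; the reloader does not work outside the main thread
    t = threading.Thread(
        target=app.run,
        kwargs={'port': port, 'debug': False, 'use_reloader': False},
        daemon=True,
    )
    t.start()
    time.sleep(0.5)  # crude wait for the server to come up
    return t


if __name__ == '__main__':
    start_in_background(make_app())
    print(requests.get('http://127.0.0.1:8015/').text)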

Python Falcon on Webfaction

I'm trying to get Falcon up and running on Webfaction. I'm not exactly a network guru, so I'm having a tough time wrapping my head around how these applications are served.
My Webfaction App is set up as mod_wsgi 4.5.3/Python 2.7
To my understanding, Falcon will run on any WSGI server. When I set up my mod_wsgi server, is it automatically configured for something like Falcon to run on? Or do I still need to install something like Gunicorn?
When I set up my webfaction app, I received a directory structure like this:
app/htdocs/index.py
And inside the index.py file, I put the example found at Falcon Tutorial
import falcon


class ThingsResource(object):
    def on_get(self, req, resp):
        """Handles GET requests"""
        resp.status = falcon.HTTP_200
        resp.body = 'Hello world!'


# falcon.API instances are callable WSGI apps
wsgi_app = api = falcon.API()

# Resources are represented by long-lived class instances
things = ThingsResource()

# things will handle all requests to the '/things' URL path
api.add_route('/things', things)
I understand there are also instructions for running WSGI, but that is where my confusion lies - is the Webfaction server already running WSGI, or do I still need something like Gunicorn, and if so, what is the best way to configure it? Do I need a cron job to keep Gunicorn running?
Thanks!
Update:
I checked error logs and received a WSGI error about not having a variable named "application",
So I changed:
wsgi_app = api = falcon.API()
to:
application = falcon.API()
This cleared out the error, but now when I visit mydomain.com/things, I get an error 404 (Not found / Does not exist).
So, this brings me back to my original question of what the next steps are. It seems as if the URL isn't being routed correctly, so it is most likely something to do with the httpd.conf file or similar - again, this is my first go at getting something like this set up live.
Here is the answer (at least for the initial question, I'm willing to bet I'll mess up something else in the near future on the same project).
Essentially, I was able to put the tutorial code in the index.py file that Webfaction generates when setting up an app & mounting on a domain. So, my tutorial code looks something like this:
import falcon


class ThingsResource(object):
    def on_get(self, req, resp):
        resp.status = falcon.HTTP_200
        resp.body = 'Hello World!'


api = application = falcon.API()
things = ThingsResource()
api.add_route('/things', things)
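Since falcon.API() is a plain WSGI callable, the same index.py can also be sanity-checked locally without Apache, for example with the standard library's wsgiref server (a sketch; the file name and port are my own choices):
# check_local.py -- run with "python check_local.py", then visit http://127.0.0.1:8000/things
from wsgiref.simple_server import make_server

from index import application  # the WSGI callable defined in index.py above

if __name__ == '__main__':
    make_server('127.0.0.1', 8000, application).serve_forever()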
Since I couldn't find much info about launching a Falcon app on Webfaction, I looked at how similar applications run on Webfaction (Flask in this example). That being said, I found a snippet in the Flask docs showing how to get set up on Webfaction. I'm not sure if this means my entire application will work, but I do know that the Falcon tutorial code at least works. Essentially, I just had to edit the httpd.conf file per the instructions found here: Flask Webfaction
WSGIPythonPath /home/yourusername/webapps/yourapp/htdocs/
#If you do not specify the following directive the app *will* work but you will
#see index.py in the path of all URLs
WSGIScriptAlias / /home/yourusername/webapps/yourapp/htdocs/index.py
<Directory /home/yourusername/webapps/yourapp/htdocs/>
AddHandler wsgi-script .py
RewriteEngine on
RewriteBase /
WSGIScriptReloading On
</Directory>
I hope this helps anybody with a similar issue.

Where to put logging setup code in a flask app?

I'm writing my first Flask application. The application itself runs fine. I just have a newbie question about logging in production mode.
The basic structure:
app/
app/templates/
app/static
config.py
flask/... <- virtual env with flask + extensions
run.py
The application is started by the run.py script:
#!flask/bin/python
import os.path
import sys

appdir = os.path.dirname(os.path.abspath(__file__))
if appdir not in sys.path:
    sys.path.insert(1, appdir)

from app import app as application

if __name__ == '__main__':
    application.run(debug=True)
and is started either directly or from an Apache 2.4 web server. I have these lines in the apache config:
WSGIPythonHome /usr/local/opt/app1/flask
WSGIScriptAlias /app1 /usr/local/opt/app1/run.py
In the former case, debug=True is all I need for development.
I'd like to have some logging also for the latter case, i.e. when running under Apache on a production server. Following is a recommendation from the Flask docs:
if not app.debug:
    import logging
    from themodule import TheHandlerYouWant
    file_handler = TheHandlerYouWant(...)
    file_handler.setLevel(logging.WARNING)
    app.logger.addHandler(file_handler)
It needs some customization, but that's what I want - instructions for the case when the app.debug flag is not set. A similar recommendation was also given here:
How do I write Flask's excellent debug log message to a file in production?
Please help: where do I have to put this code?
UPDATE: based on the comments by davidism and the first answer I got, I think the app in its current simple form is not suitable for what I was asking. I will modify it to use different sets of configuration data as recommended here: http://flask.pocoo.org/docs/0.10/config/#development-production . If my application were larger, I would follow pech0rin's answer.
UPDATE2: I think the key here is that the environment variables should control how the application is to be configured.
I have had a lot of success with setting up my logging configurations inside a create_app function. This uses the application factory pattern. This allows you to pass in some arguments or a configuration class. The application is then specifically created using your parameters.
This allows you initialize the application, setup logging, and do whatever else you want to do, before the application is sent back to be run.
For example:
def create_app(dev=False):
    app = Flask(__name__)
    if dev:
        app.config['DEBUG'] = True
    else:
        ...
        app.logger.addHandler(file_handler)
    return app
This has worked very well for me in production environments. YMMV
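To tie this back to the question's layout, here is a sketch of what run.py might look like with the factory; the RotatingFileHandler and log path are illustrative choices of mine, not part of the original answer:
#!flask/bin/python
# run.py -- sketch only
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask


def create_app(dev=False):
    app = Flask(__name__)
    if dev:
        app.config['DEBUG'] = True
    else:
        # illustrative production handler; the path must be writable by the Apache user
        file_handler = RotatingFileHandler('/var/log/app1/app1.log',
                                           maxBytes=1024 * 1024, backupCount=5)
        file_handler.setLevel(logging.WARNING)
        app.logger.addHandler(file_handler)
    return app


# the callable mod_wsgi imports
application = create_app(dev=False)

if __name__ == '__main__':
    # local development run
    create_app(dev=True).run(debug=True)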

Push Server with Django/Apache

I'm trying to use server push with my Django app. Of all the different solutions I've seen, django-socketio seemed to be the easiest to implement, and I got it working when launching the server through manage.py. However, when it goes to production, I'd like it to be served through Apache. In the past, I've done this with wsgi, but django-socketio isn't playing nicely with the default wsgi script. Is there something simple I can just change in my django.wsgi that lets apache do the right thing? If not, what would be the suggested way to handle this?
EDIT: Here's the WSGI script I was normally using (without any kind of push server), plus a bit more explanation.
import os, sys
locale = os.path.realpath(__file__)
ROOT_DIR = locale[:locale.find('/server/apache/')]
sys.path.append(ROOT_DIR)
sys.path.append(ROOT_DIR+'/server')
os.environ['DJANGO_SETTINGS_MODULE'] = 'server.settings'
os.environ['PYTHON_EGG_CACHE'] = '/var/www/.python-eggs'
import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()
sys.stdout = sys.stderr
The sys.stdout = sys.stderr is just to redirect any test print statements to apache's error log. The problem is, when I use this, I get an error complaining that request.environ doesn't have the key "socketio" or "DJANGO_SOCKETIO_PORT". I can add DJANGO_SOCKETIO_PORT easily enough (os.environ['DJANGO_SOCKETIO_PORT']="9000"), but from what I can tell, request.environ['socketio'] is set to an instance of SocketIOProtocol somewhere in django-socketio's internals. Also, after looking at the command that django-socketio added to manage.py, I noticed that it creates an instance of SocketIOServer, and calls serve_forever on it, but I have no idea where to put that in my code. Hopefully this will make it easier to see what I'm trying to get done.