I've written a custom admin action and, through model permissions, granted it to two users.
But now I only want one of them to be able to run it at any one time. So I was thinking that whenever they tick the action checkbox and press the action button, it checks whether a request.POST for that action is already being handled.
So my question is: can I check whether any other HTTP request for this action is in progress before it takes the user to the intermediary page, and display a message if so? But without having to mine the server logs.
To take a step back, I think what you're really asking is how to share data across entry points to your application, e.g. if you only want one person to be able to trigger an action from a button at a time.
One strategy for doing this is to take some deployment-wide accessible datastore (like a cache or a message queue that ALL instances of your deployment have access to) and put a value there that acts like a lock. This relies on that datastore supporting atomic reads and writes. Within Django, something like Redis or memcached works well for this purpose (especially if you're already using it as your cache backend).
You might have something that looks like this (example adapted from the Celery docs):
from datetime import datetime, timedelta
from django.core.cache import cache
from contextlib import contextmanager

LOCK_EXPIRE = 600  # Let the lock time out in case your code crashes.

@contextmanager
def memcache_lock(lock_id):
    timeout_at = datetime.now() + timedelta(seconds=LOCK_EXPIRE)
    # cache.add fails if the key already exists
    status = cache.add(lock_id, 'locked', LOCK_EXPIRE)
    try:
        yield status
    finally:
        if datetime.now() < timeout_at and status:
            # don't release the lock if we exceeded the timeout
            # to lessen the chance of releasing an expired lock
            # owned by someone else
            # also don't release the lock if we didn't acquire it
            cache.delete(lock_id)
def my_custom_action(self, *args, **kwargs):
    lock_id = "my-custom-action-lock"
    with memcache_lock(lock_id) as acquired:
        if acquired:
            return do_stuff()
        else:
            do_something_else_if_someone_is_already_doing_stuff()
            return
I have a system that has overlapping shift workers on it 24/7. Currently it is not uncommon for one worker to forget to log out and for the next worker to pick up their session and run with it. This causes some accountability issues.
I do realise there are options for session length, i.e. settings.SESSION_COOKIE_AGE, but these are a bit blunt for our purposes. We have different workers with different shift lengths, managers who have 2FA on-action, and it's basically just not the path we want to pursue. Simply put...
I want to programmatically set the session death time on login.
We already have a custom Login view but this bubbles up through the built-in django.contrib.auth.forms.AuthenticationForm. And even there I can't see how to set an expiry on a particular session.
Any suggestions?
Edit: request.session's .get_expiry_age() and set_expiry(value) seem relevant, but the expiry they manage refreshes whenever the session is modified, i.e. it cycles around based on when the session was last modified, not when the session started. I need something that sets a maximum age on the session.
Edit 2: I guess I could write into the session on login and run something externally (a cronned management command or similar) that checks the expiries (where they exist) and nukes each session that has lapsed.
Came up with an answer thanks to the comments. On login I insert the timestamp into session:
request.session['login_timestamp'] = timezone.now().timestamp()
If you're wondering why a timestamp, and not datetime.datetime.now() or timezone.now(): Django's default session serializer uses JSON, and Django's JSON encoder does not handle datetimes. This is circumventable by writing an encoder that can handle datetimes... but just using a seconds-since-epoch value is enough for me.
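If you don't have a convenient spot in your login view, a minimal sketch of doing the same thing from Django's user_logged_in signal (the receiver name is made up):

from django.contrib.auth.signals import user_logged_in
from django.dispatch import receiver
from django.utils import timezone

@receiver(user_logged_in)
def stamp_login_time(sender, request, user, **kwargs):
    # Record when this session actually started; the middleware below
    # compares the current time against this value.
    request.session['login_timestamp'] = timezone.now().timestamp()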
And then have a little middleware to check that session against the current time.
from django.conf import settings
from django.contrib import messages
from django.contrib.auth import logout
from django.shortcuts import redirect
from django.utils import timezone

class MyMiddleware(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # other checks to make sure this middleware should run,
        # e.g. this only happens on authenticated URLs
        login_timestamp_ago = timezone.now().timestamp() - request.session.get('login_timestamp', timezone.now().timestamp())
        if settings.RECEPTION_LOGIN_DURATION and <insert user checks here> and login_timestamp_ago >= settings.RECEPTION_LOGIN_DURATION:
            logout(request)  # nukes the session
            messages.warning(request, "Your session has expired. We need you to log in again to confirm your identity.")
            return redirect(request.get_full_path())
        return self.get_response(request)
The order of events here is quite important. logout(request) destroys the whole session. If you write a message (stored in session) beforehand, it'll be missing after the logout(request).
I have several APIs as sources of data, for example blog posts. What I'm trying to achieve is to send requests to these APIs in parallel from a Django view and collect the results. There is no need to store the results in the db; I just need to pass them to my view response. My project is written in Python 2.7, so I can't use asyncio. I'm looking for advice on the best practice to solve this (Celery, Tornado, something else?) with examples of how to achieve it, because I'm only starting my way in async. Thanks.
A solution is to use Celery, pass your request args to it, and use AJAX on the front end.
Example:
def my_def(request):
    do_something_in_celery.delay()
    return Response(something)
To check whether a task has finished in Celery, you can keep the return value of the .delay() call in a variable:
task_run = do_something_in_celery.delay()
task_run has an .id property.
Return this .id to your front end and use it to monitor the status of the task.
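A minimal sketch of such a status endpoint (the view name and URL wiring are up to you; it assumes a Celery result backend is configured):

from celery.result import AsyncResult
from django.http import JsonResponse

def task_status(request, task_id):
    # Look up the task by the id previously returned to the front end.
    result = AsyncResult(task_id)
    return JsonResponse({'status': result.status, 'ready': result.ready()})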
And the function executed by Celery must have the @task decorator:
@task
def do_something_in_celery(*args, **kwargs):
    ...
You will also need a broker to manage the tasks, such as Redis or RabbitMQ.
Look at these URLs:
http://masnun.com/2014/08/02/django-celery-easy-async-task-processing.html
https://buildwithdjango.com/blog/post/celery-progress-bars/
http://docs.celeryproject.org/en/latest/index.html
I found a solution using ThreadPoolExecutor from concurrent.futures (available on Python 2 via the futures backport library).
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
You can also check out the rest of the concurrent.futures doc.
Important!
The ProcessPoolExecutor class has known (unfixable) problems on Python 2 and should not be relied on for mission critical work.
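Applied to the original question, a rough sketch of fanning out to several blog-post APIs inside a Django view could look like this (the endpoint URLs, the requests dependency, and the assumption that each API returns a JSON list are all mine, not from the question):

import concurrent.futures

import requests  # assumed HTTP client; any will do
from django.http import JsonResponse

# Hypothetical endpoints standing in for the real blog-post APIs.
API_URLS = [
    'https://api.example.com/posts',
    'https://blog.example.org/api/posts',
]

def fetch_posts(url, timeout=10):
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()
    return response.json()

def combined_posts(request):
    results, errors = [], []
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        future_to_url = {executor.submit(fetch_posts, url): url for url in API_URLS}
        for future in concurrent.futures.as_completed(future_to_url):
            url = future_to_url[future]
            try:
                results.extend(future.result())
            except Exception as exc:
                errors.append({'url': url, 'error': str(exc)})
    return JsonResponse({'posts': results, 'errors': errors})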
My goal is to cache a view until an event occurs where the view's cache would need to expire, otherwise cache for 1 hour. This is what I have in urls.py
url(r'^get_some_stuff/$', cache_page(60 * 60, key_prefix='get_some_stuff')(views.StuffView.as_view())),
And this works fine. Now I'm trying to fetch the cached view to verify that there's something there and I tried this:
from django.core.cache import cache
cache.get('get_some_stuff')
But this returns None. I was hoping to do something like this:
from django.core.cache import cache
#relevant event happened
cache.delete('get_some_stuff')
What's the right way to handle cache?
I've tried passing the URL path:
cache.get('/api/get_some_stuff/')
And I still get None returned.
>>> cache.has_key('/api/get_some_stuff/')
False
>>> cache.has_key('/api/get_some_stuff')
False
>>> cache.has_key('get_some_stuff')
False
I've reviewed the suggested answer and it does not solve the underlying issue at all. It does not appear to be as trivial as passing the URL routing path as the key, since the keys are somewhat abstracted within Django.
Here is a code snippet from Relekang about expiring @cache_page caches:
from django.core.cache import cache
from django.core.urlresolvers import reverse
from django.http import HttpRequest
from django.utils.cache import get_cache_key

def expire_page_cache(view, args=None):
    """
    Removes cache created by cache_page functionality.
    Parameters are used as they are in reverse()
    """
    if args is None:
        path = reverse(view)
    else:
        path = reverse(view, args=args)

    request = HttpRequest()
    request.path = path

    key = get_cache_key(request)
    if cache.has_key(key):
        cache.delete(key)
Django's cache framework only lets you cache data for a predefined time; to clear cached data when some event occurs, you can use Django signals to notify a receiver function that clears the cache (see the sketch below).
Also, cache.get, cache.has_key and cache.delete require the complete cache key to be passed, not the URL or the key_prefix. Since Django builds these keys itself, we don't have much control over getting or deleting the data directly.
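For example, a sketch of wiring that up with a post_save signal (the Stuff model is hypothetical, and it assumes the URL pattern for the view is named 'get_some_stuff' so reverse() can resolve it):

from django.db.models.signals import post_save
from django.dispatch import receiver

from myapp.models import Stuff  # hypothetical model behind StuffView

@receiver(post_save, sender=Stuff)
def invalidate_stuff_cache(sender, instance, **kwargs):
    # The relevant event happened: expire the page cached by cache_page.
    expire_page_cache('get_some_stuff')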
If you are using database caching, you can delete stale cache records from the cache table with a raw SQL query, something along the lines of: DELETE FROM cache_table WHERE cache_key LIKE '%1:views.decorators.cache.cache_page%'.
I faced the same issues with per-view caching and went with the low-level cache API instead. I cache the final result querysets using cache.set(), and the good part is that you set your own key and can do what you like with it.
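A minimal sketch of that approach (the key name, timeout and Stuff model are placeholders):

from django.core.cache import cache
from myapp.models import Stuff  # hypothetical model behind the view

STUFF_CACHE_KEY = 'stuff:list'  # any key scheme you like

def get_stuff():
    stuff = cache.get(STUFF_CACHE_KEY)
    if stuff is None:
        stuff = list(Stuff.objects.all())           # the expensive query
        cache.set(STUFF_CACHE_KEY, stuff, 60 * 60)  # cache for 1 hour
    return stuff

def invalidate_stuff():
    # Call this when the relevant event occurs.
    cache.delete(STUFF_CACHE_KEY)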
I need to import data from several public APIs for a user after they sign up. django-allauth is included, and I have registered a signal handler to call the right methods after allauth emits user_signed_up.
Because the data import takes too much time and the request is blocked by the signal handler, I want to use Celery to do the work.
My test task:
@app.task()
def test_task(username):
    print('##########################Foo#################')
    sleep(40)
    print('##########################' + username + '#################')
    sleep(20)
    print('##########################Bar#################')
    return 3
I'm calling the task like this:
from django.dispatch import receiver
from allauth.account.signals import user_signed_up

from game_studies_platform.taskapp.celery import test_task

@receiver(user_signed_up)
def on_user_signed_in(sender, request, *args, **kwargs):
    test_task.apply_async(('John Doe',))
The task should be put into the queue and the request should return immediately. But the request is blocked and I have to wait a minute.
The project is set up with https://github.com/pydanny/cookiecutter-django and I'm running it in a Docker container.
Celery is configured to use the Django database in development but will use Redis in production.
The solution was to switch CELERY_ALWAYS_EAGER from True to False in local.py. I was pointed to that solution in the Gitter channel of cookiecutter-django.
The calls mentioned above were already correct.
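For reference, the change amounts to something like this in the development settings module (the path follows the cookiecutter-django layout; adjust to your project):

# config/settings/local.py  (assumed location of the "local.py" mentioned above)
# With ALWAYS_EAGER = True, Celery executes tasks synchronously in-process,
# which is why the signal handler was blocking the request.
CELERY_ALWAYS_EAGER = False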
I use:
Celery
Django-Celery
RabbitMQ
I can see all my tasks in the Django admin page, but at the moment it has just a few states, like:
RECEIVED
RETRY
REVOKED
SUCCESS
STARTED
FAILURE
PENDING
That's not enough information for me. Is it possible to add more details about a running process to the admin page, like a progress bar or a finished-jobs counter?
I know how to use the Celery logging function, but a GUI is better in my case for several reasons.
So, is it possible to send some tracing information to the Django-Celery admin page?
Here's my minimal progress-reporting Django backend using your setup. I'm still a Django n00b and it's the first time I'm messing with Celery, so this can probably be optimized.
from time import sleep

from celery import task, current_task
from celery.result import AsyncResult
from django.http import HttpResponse, HttpResponseRedirect
from django.core.urlresolvers import reverse
from django.utils import simplejson as json
from django.conf.urls import patterns, url

@task()
def do_work():
    """ Get some rest, asynchronously, and update the state all the time """
    for i in range(100):
        sleep(0.1)
        current_task.update_state(state='PROGRESS',
                                  meta={'current': i, 'total': 100})

def poll_state(request):
    """ A view to report the progress to the user """
    if 'job' in request.GET:
        job_id = request.GET['job']
    else:
        return HttpResponse('No job id given.')

    job = AsyncResult(job_id)
    data = job.result or job.state
    return HttpResponse(json.dumps(data), mimetype='application/json')

def init_work(request):
    """ A view to start a background job and redirect to the status page """
    job = do_work.delay()
    return HttpResponseRedirect(reverse('poll_state') + '?job=' + job.id)

urlpatterns = patterns('webapp.modules.asynctasks.progress_bar_demo',
    url(r'^init_work$', init_work),
    url(r'^poll_state$', poll_state, name="poll_state"),
)
I am starting to figure this out myself. Start by defining a PROGRESS state exactly as explained in the Celery user guide, then all you need is to insert some JS in your template that updates your progress bar.
Thanks @Florian Sesser for your example!
I made a complete Django app that shows users the progress of creating 1000 objects, at http://iambusychangingtheworld.blogspot.com/2013/07/django-celery-display-progress-bar-of.html
Everyone can download and use it!
I would recommend a library called celery-progress for this. It is designed to make it as easy as possible to drop a basic end-to-end progress bar setup into a Django app with as little scaffolding as possible, while also supporting heavy customization on the front end if desired. There are lots of docs and references for getting started in the README.
Full disclosure: I am the author/maintainer of said library.
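From memory of its README (double-check the current docs, since the exact API may have changed), a task instrumented for it looks roughly like this:

from time import sleep

from celery import shared_task
from celery_progress.backend import ProgressRecorder  # assumed import path

@shared_task(bind=True)
def create_objects(self, total):
    progress_recorder = ProgressRecorder(self)
    for i in range(total):
        sleep(0.1)  # stand-in for the real work
        progress_recorder.set_progress(i + 1, total)
    return 'done'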