Start and stop a periodic background task with Django - django

I would like to build a bitcoin notification with Django. I managed to get a working Telegram bot that sends the bitcoin stats when I ask it to. Now I would like it to send me a message if bitcoin reaches a specific value. There are some tutorials about running a plain Python script on a server, but not with Django. I read some answers and descriptions about Django Channels but couldn't adapt them to my project.
I would like to send a command via Telegram with an amount and a duration. Django would then start a background process with these values and the channel I'm sending from. If the amount is reached within the duration, Django sends a message back to my channel. This should also work for more than one person.
Is this possible with Django out of the box, maybe with decorators, or do I need django-channels or something else?
Edit 2018-08-10:
Maybe my code explains a little bit better what I want to do.
import requests
import json
from datetime import datetime

from django.shortcuts import render
from django.http import HttpResponse
from django.conf import settings
from django.views.generic import TemplateView
from django.views.decorators.csrf import csrf_exempt


class AboutView(TemplateView):
    template_name = 'telapi/about.html'


bot_token = settings.BOT_TOKEN


def get_url(method):
    return 'https://api.telegram.org/bot{}/{}'.format(bot_token, method)


def process_message(update):
    data = {}
    data['chat_id'] = update['message']['from']['id']
    data['text'] = "I can hear you!"
    r = requests.post(get_url('sendMessage'), data=data)


@csrf_exempt
def process_update(request, r_bot_token):
    ''' Method that is called from telegram-bot'''
    if request.method == 'POST' and r_bot_token == bot_token:
        update = json.loads(request.body.decode('utf-8'))
        if 'message' in update:
            if update['message']['text'] == 'give me news':
                new_bitcoin_price(update)
            else:
                process_message(update)
        return HttpResponse(status=200)


bitcoin_api_uri = 'https://api.coinmarketcap.com/v2/ticker/1/?convert=EUR'
# response = requests.get(bitcoin_api_uri)


def get_latest_bitcoin_price():
    response = requests.get(bitcoin_api_uri)
    response_json = response.json()
    euro_price = float(response_json['data']['quotes']['EUR']['price'])
    timestamp = int(response_json['metadata']['timestamp'])
    date = datetime.fromtimestamp(timestamp).strftime('%Y-%m-%d %H:%M:%S')
    return euro_price, date


def new_bitcoin_price(update):
    data = {}
    data['chat_id'] = update['message']['from']['id']
    euro_price, date = get_latest_bitcoin_price()
    data['text'] = "The current price ({}) is {:.2f}€".format(
        date, euro_price)
    r = requests.post(get_url('sendMessage'), data=data)
Edit 2018-08-13:
I think the solution would be celery-beat and channels. Does anyone know a good tutorial?

One of my teammates uses django-celery-beat, which is available at https://github.com/celery/django-celery-beat, for exactly this, and he has given me excellent feedback on it. You can schedule Celery tasks using crontab syntax.
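As a rough sketch of the idea (not from the original answer; the task path myapp.tasks.check_bitcoin_price is just a placeholder), a periodic task can be registered through the database-backed schedule models that django-celery-beat provides:

from django_celery_beat.models import CrontabSchedule, PeriodicTask

# Run every 5 minutes (crontab syntax: minute, hour, day of week, ...)
schedule, _ = CrontabSchedule.objects.get_or_create(
    minute='*/5',
    hour='*',
    day_of_week='*',
    day_of_month='*',
    month_of_year='*',
)

# Point the schedule at a Celery task by its dotted path.
# 'myapp.tasks.check_bitcoin_price' is a hypothetical task name.
PeriodicTask.objects.get_or_create(
    crontab=schedule,
    name='Check bitcoin price',
    task='myapp.tasks.check_bitcoin_price',
)

With the beat scheduler running (celery -A proj beat --scheduler django_celery_beat.schedulers:DatabaseScheduler), the task is picked up and executed on that schedule.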

I had the same issue; there are several typical approaches: Celery, Django Channels, etc.
But you can avoid them all with a simpler approach: custom management commands, see https://docs.djangoproject.com/en/2.1/howto/custom-management-commands/
I have used Django management commands in my project to run periodic tasks that rebuild user statistics:
Implement your own management command: for example, if your application is named myapp and you place my_periodic_task.py in the myapp/management/commands folder, you can run the task once by typing python manage.py my_periodic_task (a sketch of such a command follows at the end of this answer).
Place a new file, for example background.py, beside manage.py with code like this:
import os
from subprocess import call
from time import sleep

BASE = os.path.dirname(__file__)
MANAGE_BASE = os.path.join(BASE, 'manage.py')

while True:
    sleep(YOUR_TIMEOUT)  # YOUR_TIMEOUT: interval between runs, in seconds
    call(['python', MANAGE_BASE, 'my_periodic_task'])
Run your server, for example: python background.py & python manage.py runserver 0.0.0.0:8000
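Here is the sketch referenced above: a minimal my_periodic_task command (the body shown is only an assumed placeholder for whatever work your task does):

# myapp/management/commands/my_periodic_task.py
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = 'Runs one iteration of the periodic task'

    def handle(self, *args, **options):
        # Put the actual work here, e.g. rebuilding user statistics
        # or checking the bitcoin price and sending notifications.
        self.stdout.write('periodic task executed')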

Related

How to schedule an email using Twilio SendGrid in Django?

I'm currently building an app which sends email to multiple users, which I'm able to do, but I want to add functionality that schedules an email. I'm using the send_at field, as you can see below:
settings.py
EMAIL_FROM = 'EMAIL'
EMAIL_API_CLIENT = 'XXXXXXXX'
views.py
import json
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
from django.conf import settings

message = Mail(from_email=settings.EMAIL_FROM,
               to_emails=selectedphone,
               subject=subject,
               html_content=editor)
message.extra_headers = {'X-SMTPAPI': json.dumps({'send_at': FinalScheduleTime})}
sg = SendGridAPIClient(settings.EMAIL_API_CLIENT)
response = sg.send(message)
if response.status_code == 202:
    emailstatus = "Accepted"
elif .....
else.....
I've also tried message.extra_headers = {'SendAt':FinalScheduleTime} but it's not working either.
Here FinalScheduleTime is a datetime object. The SendGrid API accepts a UNIX timestamp, according to the documentation. You can check it here.
To convert your datetime object into a UNIX timestamp, you can use Python's time module.
scheduleTime = int(time.mktime(FinalScheduleTime.timetuple()))
Also, replace the message.extra_headers with message.send_at.
Hence, your final code will look like:
import json
import time
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail
from django.conf import settings

message = Mail(from_email=settings.EMAIL_FROM,
               to_emails=selectedphone,
               subject=subject,
               html_content=editor)

scheduleTime = int(time.mktime(FinalScheduleTime.timetuple()))
message.send_at = scheduleTime

sg = SendGridAPIClient(settings.EMAIL_API_CLIENT)
response = sg.send(message)
if response.status_code == 202:
    emailstatus = "Accepted"
elif .....
else.....
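One caveat worth adding (not part of the original answer): time.mktime() interprets the time tuple in the server's local timezone. If FinalScheduleTime is a UTC datetime, calendar.timegm() on the UTC time tuple is the safer conversion:

import calendar

# For a UTC datetime, convert without applying the local timezone offset.
scheduleTime = int(calendar.timegm(FinalScheduleTime.utctimetuple()))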
Here is an official Twilio blog post on sending scheduled emails with SendGrid from Python: https://www.twilio.com/blog/scheduled-emails-python-flask-twilio-sendgrid
Also, here are the official docs.

Celery - How to get the task id for a shared_task?

I've looked at questions like this one and a dozen others, but none of them seems to work.
I have a shared_task like this one, which doesn't return anything:
@shared_task
def rename_widget(widget_id, name):
    w = Widget.objects.get(id=widget_id)
    w.name = name
    w.save()
I've tried self.request.id and current_task.request.id but they both returned None.
My Celery version is 5.0.4 and my Django version is 3.1.1. I'm using RabbitMQ as the message broker.
This seems like a setup issue, or an issue with how you're calling the task. Without knowing more of the context it is hard to say; perhaps you need to bind the task? I've sketched out that solution:
tasks.py
from celery import shared_task
from demoapp.models import Widget


@shared_task(bind=True)
def rename_widget(self, widget_id, name):
    print(self.request.id)
    w = Widget.objects.get(id=widget_id)
    w.name = name
    w.save()
views.py or somewhere else:
from tasks import rename_widget

result = rename_widget.delay(1, 'new_name')
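As a side note not covered in the original answer, the AsyncResult returned by delay() carries the same id on the calling side:

print(result.id)  # the task id, matching self.request.id inside the task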
If that's not the issue, I'd check out the full working Django example setup for ideas, found here: https://github.com/celery/celery/tree/master/examples/django/

xlwings + Django: how to not lose a connection

I am trying to deploy a spreadsheet model with a web page front end using Django. The web "app" flow is simple:
1. The user enters data in a web form.
2. The form data is sent to a Django backend view function run_model(request).
3. The view parses the request object to get the user inputs and then populates named ranges on the Excel model's input sheet, using xlwings to interact with the spreadsheet model (the sheet.range function is used).
4. calculate() is run on the spreadsheet.
5. Outputs are read from another tab in the spreadsheet using xlwings and named ranges (again via sheet.range).
The problem is that the connection to the Excel process keeps getting killed by Django (I believe it handles each request in a separate process), so I can get at most one request to work (by importing xlwings inside the view function); when I send a second request, the connection is dead and won't reactivate.
Basically, how can I keep the connection to the workbook alive between requests or at least re-open a connection for each request?
OK, I ended up implementing a simple "spreadsheet server" to address the issue of Django killing the connection.
I first wrote code for the server (server.py), then some code to start it up from command-line args (start_server.py), and then had my view open a connection to this model when it needs to use it (in views.py).
So I had to separate my (Excel + xlwings) process and Django into independent processes, to keep the interfaces clean and control how much access Django has to my spreadsheet model. It works fine now.
start_server.py
"""
Starts the spreadsheet server on the specified port.

Usage: python start_server.py port_number logging_filepath
    port_number: sets the server listening on localhost:<port_number>
    logging_filepath: full path to the logging file (all messages are directed to this file)
"""
import argparse
import os
import subprocess

_file_path = os.path.dirname(os.path.abspath(__file__))

# command line interface
parser = argparse.ArgumentParser()
parser.add_argument('port_number',
                    help='sets server listening on localhost:<port_number>')
parser.add_argument('logging_filepath',
                    help='full path to logging file (all messages directed to this file)')
args = parser.parse_args()

# set up logging
_logging_path = args.logging_filepath
print("logging output to " + _logging_path)
_log = open(_logging_path, 'wb')

# set up and start server
_port = args.port_number
print('starting Excel server...')
subprocess.Popen(['python', _file_path + '\\server.py', str(_port)],
                 stdin=_log, stdout=_log, stderr=_log)
print("".join(['server listening on localhost:', str(_port)]))
server.py
"""
Imports a package that starts the Excel process (using xlwings), gets an interface
to the object wrapper for the Excel model, and then serves requests to that model.
"""
import os
import sys
from multiprocessing.connection import Listener

_file_path = os.path.dirname(os.path.abspath(__file__))
sys.path.append(_file_path)
import excel_object_wrapper

_MODEL_FILENAME = 'excel_model.xls'

model_interface = excel_object_wrapper.get_model_interface(_file_path + "\\" + _MODEL_FILENAME)
model_connection = model_interface['run_location']
close_model = model_interface['close_model']

_port = sys.argv[1]
address = ('localhost', int(_port))
listener = Listener(address)
_alive = True
print('starting server on ' + str(address))
while _alive:
    print("listening for connections")
    conn = listener.accept()
    print('connection accepted from', listener.last_accepted)
    while True:
        try:
            input = conn.recv()
            print(input)
            if not input or input == 'close':
                print('closing connection to ' + str(conn))
                conn.close()
                break
            if input == 'kill':
                _alive = False
                print('stopping server')
                close_model()
                conn.send('model closed')
                conn.close()
                listener.close()
                break
        except EOFError:
            print('closing connection to ' + str(conn))
            conn.close()
            break
        conn.send(model_connection(*input))
views.py (from within Django)
from __future__ import unicode_literals
import os
from multiprocessing.connection import Client

from django.shortcuts import render
from django.http import HttpResponse


def run(request):
    # we started the Excel server on localhost:6000 before starting Django
    model_connection = Client(('localhost', 6000))
    params = request.POST
    param_a = float(params['a'])
    param_b = float(params['b'])
    model_connection.send((param_a, param_b))
    results = model_connection.recv()
    return render(request, 'model_app/show_results.html', context={'results': results})
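As a small usage note (not part of the original answer): the server above also understands the 'close' and 'kill' commands, so it can be shut down cleanly from any Python process, for example:

from multiprocessing.connection import Client

conn = Client(('localhost', 6000))
conn.send('kill')    # handled in server.py: closes the Excel model and the listener
print(conn.recv())   # 'model closed'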

Spider won't run after updating Scrapy

As seems to frequently happen here, I am quite new to Python 2.7 and Scrapy. Our project has us scraping website data, following some links, doing more scraping, and so on. This was all working fine. Then I updated Scrapy.
Now when I launch my spider, I get the following message:
This wasn't coming up anywhere previously (none of my prior error messages looked anything like this). I am now running Scrapy 1.1.0 on Python 2.7, and none of the spiders that previously worked on this project are working.
I can provide some example code if need be, but my (admittedly limited) knowledge of Python suggests to me that it's not even getting to my script before bombing out.
EDIT:
OK, so this code is supposed to start at the first authors page for Deakin University academics on The Conversation, and go through and scrape how many articles they have written and comments they have made.
import scrapy
from ltuconver.items import ConversationItem
from ltuconver.items import WebsitesItem
from ltuconver.items import PersonItem
from scrapy import Spider
from scrapy.selector import Selector
from scrapy.http import Request
import bs4


class ConversationSpider(scrapy.Spider):
    name = "urls"
    allowed_domains = ["theconversation.com"]
    start_urls = [
        'http://theconversation.com/institutions/deakin-university/authors']

    # URL grabber
    def parse(self, response):
        requests = []
        people = Selector(response).xpath('//*[@id="experts"]/ul[*]/li[*]')
        for person in people:
            item = WebsitesItem()
            item['url'] = 'http://theconversation.com/' + str(person.xpath('a/@href').extract())[4:-2]
            self.logger.info('parseURL = %s', item['url'])
            requests.append(Request(url=item['url'], callback=self.parseMainPage))

        soup = bs4.BeautifulSoup(response.body, 'html.parser')
        try:
            nexturl = 'https://theconversation.com' + soup.find('span', class_='next').find('a')['href']
            requests.append(Request(url=nexturl))
        except:
            pass
        return requests

    # go to the URLs and grab the info
    def parseMainPage(self, response):
        person = Selector(response)
        item = PersonItem()
        item['name'] = str(person.xpath('//*[@id="outer"]/header/div/div[2]/h1/text()').extract())[3:-2]
        item['occupation'] = str(person.xpath('//*[@id="outer"]/div/div[1]/div[1]/text()').extract())[11:-15]
        item['art_count'] = int(str(person.xpath('//*[@id="outer"]/header/div/div[3]/a[1]/h2/text()').extract())[3:-3])
        item['com_count'] = int(str(person.xpath('//*[@id="outer"]/header/div/div[3]/a[2]/h2/text()').extract())[3:-3])
And in my Settings, I have:
BOT_NAME = 'ltuconver'
SPIDER_MODULES = ['ltuconver.spiders']
NEWSPIDER_MODULE = 'ltuconver.spiders'
DEPTH_LIMIT=1
Apparently my six.py file was corrupt (or something like that). After swapping it out with the same file from a colleague, it started working again 8-\

Scale Gevent Socketio

I currently have a site set up using Django. I have added gevent-socketio to provide a chat function. I need to scale it, as there are quite a few users already on the site, and I can't find a way to do so.
I tried https://github.com/abourget/gevent-socketio/tree/master/examples/django_chat/chat
I am using Gunicorn and the socketio.sgunicorn.GeventSocketIOWorker worker class, so at first I thought of increasing the worker count. Unfortunately this seems to fail intermittently. I have started rewriting it to use Redis, based on a few sources I found, with one worker on each server and the servers load balanced. However, this seems to have the same problem. I am wondering if there is some issue in the gevent-socketio code itself which does not allow it to scale.
Here is what I have so far; it is just the submit-message code.
# supremo/utils.py
import simplejson
from redis import Redis
from django.conf import settings


def redis_client():
    """Get a redis client."""
    return Redis(settings.REDIS_HOST, settings.REDIS_PORT, settings.REDIS_DB)


class PubSub(object):
    """
    Very simple Pub/Sub pattern wrapper
    using simplified Redis Pub/Sub functionality.

    Usage (publisher)::

        import redis
        r = redis.Redis()
        q = PubSub(r, "channel")
        q.publish("test data")

    Usage (listener)::

        import redis
        r = redis.Redis()
        q = PubSub(r, "channel")

        def handler(data):
            print("Data received: %r" % data)

        q.subscribe(handler)
    """
    def __init__(self, redis, channel="default"):
        self.redis = redis
        self.channel = channel

    def publish(self, data):
        self.redis.publish(self.channel, simplejson.dumps(data))

    def subscribe(self, handler):
        redis = self.redis.pubsub()
        redis.subscribe(self.channel)
        for data_raw in redis.listen():
            if data_raw['type'] != "message":
                continue
            data = simplejson.loads(data_raw["data"])
            handler(data)


# socket.io namespace for the chat
from socketio.namespace import BaseNamespace
from socketio.sdjango import namespace
from supremo.utils import redis_client, PubSub
from gevent import Greenlet


@namespace('/chat')
class ChatNamespace(BaseNamespace):
    nicknames = []
    r = redis_client()
    q = PubSub(r, "channel")

    def initialize(self):
        # Set up the redis listener
        def handler(data):
            self.emit('receive_message', data)
        greenlet = Greenlet.spawn(self.q.subscribe, handler)

    def on_submit_message(self, msg):
        self.q.publish(msg)
I used parts of the code from https://github.com/fcurella/django-push-demo, and gevent-socketio 0.3.5rc1 instead of rc2, and it is now working with multiple workers and load balancing.
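For reference, running multiple workers with the worker class mentioned in the question looks roughly like this (a sketch; the module name mysite.wsgi:application and the worker count are placeholders, not from the original answer):

# shell invocation (placeholders: mysite.wsgi:application, -w 4)
# gunicorn --worker-class socketio.sgunicorn.GeventSocketIOWorker -w 4 mysite.wsgi:application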