Django - How to track if a user is online/offline in realtime?

I'm considering using django-notifications and WebSockets to send real-time notifications to iOS/Android and web apps, so I'll probably use Django Channels.
Can I use Django Channels to track the online status of a user in real time? If yes, how can I achieve this without constantly polling the server?
I'm looking for a best practice, since I wasn't able to find any proper solution.
UPDATE:
What I have tried so far is the following approach:
Using Django Channels, I implemented a WebSocket consumer that on connect sets the user status to 'online', and when the socket gets disconnected sets the user status to 'offline'.
Originally I wanted to include an 'away' status, but my approach cannot provide that kind of information.
Also, my implementation won't work properly when the user uses the application from multiple devices, because a connection can be closed on one device but still open on another; the status would be set to 'offline' even though the user has another open connection.
from channels.consumer import AsyncConsumer
from channels.db import database_sync_to_async

class MyConsumer(AsyncConsumer):

    async def websocket_connect(self, event):
        # Called when a new websocket connection is established
        print("connected", event)
        user = self.scope['user']
        await self.update_user_status(user, 'online')

    async def websocket_receive(self, event):
        # Called when a message is received from the websocket
        # Method NOT used
        print("received", event)

    async def websocket_disconnect(self, event):
        # Called when a websocket is disconnected
        print("disconnected", event)
        user = self.scope['user']
        await self.update_user_status(user, 'offline')

    @database_sync_to_async
    def update_user_status(self, user, status):
        """
        Updates the user `status`.
        `status` can be one of the following: 'online', 'offline' or 'away'
        """
        return UserProfile.objects.filter(pk=user.pk).update(status=status)
NOTE:
My current working solution uses the Django REST Framework with an API endpoint that lets client apps send an HTTP POST request with the current status.
For example, the web app tracks mouse events and POSTs the online status every X seconds; when there are no more mouse events it POSTs the away status, and when the tab/window is about to be closed, the app sends a POST request with the offline status.
This is a working solution (depending on the browser I have issues when sending the offline status), but it works.
What I'm looking for is a better solution that doesn't need to constantly poll the server.

Using WebSockets is definitely the better approach.
Instead of having a binary "online"/"offline" status, you could count connections: when a new WebSocket connects, increase the "online" counter by one; when a WebSocket disconnects, decrease it. When it reaches 0, the user is offline on all devices.
Something like this:
from channels.db import database_sync_to_async
from django.db.models import F

@database_sync_to_async
def update_user_incr(self, user):
    UserProfile.objects.filter(pk=user.pk).update(online=F('online') + 1)

@database_sync_to_async
def update_user_decr(self, user):
    UserProfile.objects.filter(pk=user.pk).update(online=F('online') - 1)
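For completeness, here is a minimal sketch (my assumption, not part of the original answer) of a consumer wired to these helpers, defined as methods on the consumer as in the question's code; a user then counts as online whenever the online counter is greater than 0:
from channels.consumer import AsyncConsumer

class CounterConsumer(AsyncConsumer):

    async def websocket_connect(self, event):
        # One more open connection for this user
        await self.update_user_incr(self.scope['user'])

    async def websocket_disconnect(self, event):
        # One connection closed; the user is offline once the counter reaches 0
        await self.update_user_decr(self.scope['user'])

    # update_user_incr / update_user_decr defined as above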

The best approach is using WebSockets.
But I think you should store not just the status, but also a session key or a device identification. If you use just a counter, you lose valuable information, for example from which device the user is connected at a specific moment. That is key in some projects. Besides, if something goes wrong (a disconnection, a server crash, etc.), you won't be able to track which counter belongs to which device, and you'll probably need to reset the counter in the end.
I recommend storing this information in a separate related table:
from django.db import models
from django.conf import settings

class ConnectionHistory(models.Model):

    ONLINE = 'online'
    OFFLINE = 'offline'
    STATUS = (
        (ONLINE, 'On-line'),
        (OFFLINE, 'Off-line'),
    )

    user = models.ForeignKey(
        settings.AUTH_USER_MODEL,
        on_delete=models.CASCADE
    )
    device_id = models.CharField(max_length=100)
    status = models.CharField(
        max_length=10, choices=STATUS,
        default=ONLINE
    )
    first_login = models.DateTimeField(auto_now_add=True)
    last_echo = models.DateTimeField(auto_now=True)

    class Meta:
        unique_together = (("user", "device_id"),)
This way you have a record per device to track its status, and maybe some other information like IP address, geoposition, etc. Then you can do something like this (based on your code):
@database_sync_to_async
def update_user_status(self, user, device_id, status):
    # get_or_create returns an (instance, created) tuple, so update
    # the instance explicitly rather than calling .update() on it
    connection, _ = ConnectionHistory.objects.get_or_create(
        user=user, device_id=device_id,
    )
    connection.status = status
    connection.save()
    return connection
How to get a device identification
There are plenty of libraries that do this, such as https://www.npmjs.com/package/device-uuid. They simply use a bundle of browser parameters to generate a hash key. It is better than using the session id alone, because it changes less frequently.
Tracking away status
After each action, you can simply update last_echo. This way you can figure out who is connected or away, and from which device.
Advantage: in case of a crash, restart, etc., the tracking status can be re-established at any time.
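As an illustration, the 'away' status could be derived from last_echo with a query like the following; the threshold and helper name are assumptions, not part of the original answer:
from datetime import timedelta
from django.utils import timezone

AWAY_THRESHOLD = timedelta(minutes=5)  # assumption: tune to your app

def split_online_away(user):
    # Connections marked online but silent past the threshold count as away
    cutoff = timezone.now() - AWAY_THRESHOLD
    connected = ConnectionHistory.objects.filter(
        user=user, status=ConnectionHistory.ONLINE)
    return (connected.filter(last_echo__gte=cutoff),
            connected.filter(last_echo__lt=cutoff))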

My answer is based on C14L's answer. The idea of counting connections is very clever; I just made some improvements, at least for my case. It's quite messy and complicated, but I think it's necessary.
Sometimes a WebSocket connects more often than it disconnects, for example when it runs into errors, which makes the counter keep increasing. My approach is, instead of increasing the counter when the WebSocket opens, to increase it before the user accesses the page; when the WebSocket disconnects, I decrease the counter.
in views.py
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

def homePageView(request):
    updateOnlineStatusi_goIn(request)
    # continue normal code
    ...

def updateOnlineStatusi_goIn(request):
    useri = request.user
    if not OnlineStatus.objects.filter(user=useri).exists():
        dct = {
            'online': False,
            'connections': 0,
            'user': useri,
        }
        onlineStatusi = OnlineStatus.objects.create(**dct)
    else:
        onlineStatusi = OnlineStatus.objects.get(user=useri)
    onlineStatusi.connections += 1
    onlineStatusi.online = True
    onlineStatusi.save()
    dct = {
        'action': 'updateOnlineStatus',
        'online': onlineStatusi.online,
        'userId': useri.id,
    }
    async_to_sync(get_channel_layer().group_send)(
        'commonRoom', {'type': 'sendd', 'dct': dct})
In models.py
class OnlineStatus(models.Model):
    online = models.BooleanField(null=True, blank=True)
    connections = models.BigIntegerField(null=True, blank=True)
    user = models.OneToOneField(User, on_delete=models.CASCADE, null=True, blank=True)
in consumers.py
import json
from channels.db import database_sync_to_async
from channels.generic.websocket import AsyncWebsocketConsumer

class Consumer(AsyncWebsocketConsumer):

    async def sendd(self, e):
        await self.send(json.dumps(e["dct"]))

    async def connect(self):
        await self.accept()
        await self.channel_layer.group_add('commonRoom', self.channel_name)

    async def disconnect(self, _):
        await self.channel_layer.group_discard('commonRoom', self.channel_name)
        dct = await self.updateOnlineStatusi_goOut()
        await self.channel_layer.group_send('commonRoom', {"type": "sendd", "dct": dct})

    @database_sync_to_async
    def updateOnlineStatusi_goOut(self):
        useri = self.scope["user"]
        onlineStatusi = OnlineStatus.objects.get(user=useri)
        onlineStatusi.connections -= 1
        if onlineStatusi.connections <= 0:
            onlineStatusi.connections = 0
            onlineStatusi.online = False
        else:
            onlineStatusi.online = True
        onlineStatusi.save()
        dct = {
            'action': 'updateOnlineStatus',
            'online': onlineStatusi.online,
            'userId': useri.id,
        }
        return dct


Django Channels disconnect called before view after refresh

I met a bug that totally blew my mind. I don't know how it is possible; I would like to share it, and please, if you can, offer your knowledge and perhaps a solution.
Here's the basic structure of my code:
user submits a request
when the view function is called, create an instance.
when the web socket disconnects, delete the instance.
The issue is: when I refresh the page on my local machine, sometimes the last web socket is closed after the view function is called.
I don't know why this happens. Let's call the page A, and the new page after the refresh B. Shouldn't it be:
A is called, create instance 1
refresh
A's web socket closed, delete instance
B is called, create instance 2
And sometimes it works as expected, but other times the sequence turns into:
A is called, create instance 1
refresh
B is called, create instance 2
A's web socket closed, delete instance
and it breaks my code, because the filter applied in the deletion removes the instance I need after the refresh!
I don't know if I should do anything, because I reckon there is a big chance that this only happens on my local machine.
Some extracted code:
view
@login_required
def chatFriendsView(request):
    # ...
    toread = Toread(
        sender=sender,
        receiver=request.user,
    )
    toread.save()
    return render(request, 'chat/chat_friends.html')
model
class Toread(models.Model):
    sender = models.ForeignKey(User, on_delete=models.CASCADE, related_name='sendToread')
    receiver = models.ForeignKey(User, on_delete=models.CASCADE, related_name='receiveToread')
    # ...

    def __str__(self):
        return str(self.receiver)
ws
class ChatConsumer(AsyncJsonWebsocketConsumer):

    @staticmethod
    @database_sync_to_async
    def dismissToread(sender_pk, receiver_pk):
        for tr in Toread.objects.filter(
            sender=User.objects.get(pk=sender_pk),
            receiver=User.objects.get(pk=receiver_pk),
        ):
            tr.delete()

    async def disconnect(self, close_code):
        await ChatConsumer.dismissToread(
            sender_pk=int(self.scope['user'].pk),
            receiver_pk=int(self.room_name),
        )
        await self.channel_layer.group_discard(
            self.room_group_name,
            self.channel_name,
        )
        # ...

Flask-mail: How to handle multiple email requests at once

So I wrote a dedicated Flask app for handling emails for my application and deployed it on Heroku, in which I have set up a route to send emails:
@app.route('/send', methods=['POST'])
def send_now():
    with app.app_context():
        values = request.get_json()
        email = values['email']
        code = values['code']
        secret_2 = str(values['secret'])
        mail = Mail(app)
        msg = Message("Password Recovery",
                      sender="no*****@gmail.com",
                      recipients=[email])
        msg.html = "<h1>Your Recovery Code is: </h1><p>" + str(code) + "</p>"
        if secret == secret_2:  # `secret` is presumably defined elsewhere in the app
            mail.send(msg)
        response = {'message': 'EmailSent'}
        return jsonify(response), 201
It works fine for a single user at a time. However, when multiple users send a POST request, each client has to wait until the POST returns a 201, so the wait period keeps increasing (the mail may not even get sent). How do I handle this to accommodate multiple simultaneous users? Threads? A buffer? I have no idea.
You need to send mail via asynchronous thread calls in Python. Have a look at this code sample and adapt it to your code.
from threading import Thread
from app import app, mail  # assuming `mail` is the Mail instance created in app.py

def send_async_email(app, msg):
    with app.app_context():
        mail.send(msg)

def send_email(subject, sender, recipients, text_body, html_body):
    msg = Message(subject, sender=sender, recipients=recipients)
    msg.body = text_body
    msg.html = html_body
    thr = Thread(target=send_async_email, args=[app, msg])
    thr.start()
This will allow the mail to be sent in the background.
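For illustration, a minimal sketch (an assumption, not part of the original answer) of how the /send route could call this helper so that it returns immediately while the thread delivers the mail; the secret check from the original route is omitted for brevity:
@app.route('/send', methods=['POST'])
def send_now():
    values = request.get_json()
    code = str(values['code'])
    # Queues the mail on a background thread and returns right away
    send_email(
        "Password Recovery",
        sender="no*****@gmail.com",
        recipients=[values['email']],
        text_body="Your Recovery Code is: " + code,
        html_body="<h1>Your Recovery Code is: </h1><p>" + code + "</p>",
    )
    return jsonify({'message': 'EmailSent'}), 201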

Django system check stuck on unreachable url

In my project I use the requests library to send a POST request. The URL for that request is hardcoded in a function that is called from views.py.
The problem is that when I don't have an internet connection, or the host the URL points to is down, I can't launch the development server; it gets stuck on "Performing system checks". However, if I comment out the line with the URL, or change it to a host that is guaranteed to work, the check passes.
What is a good workaround here?
views.py
def index(request):
    s = Sync()
    s.do()
    return HttpResponse("Hello, world. You're at the polls index.")
sync.py
class Sync:
    def do(self):
        reservations = Reservation.objects.filter(is_synced=False)
        for reservation in reservations:
            serializer = ReservationPKSerializer(reservation)
            dictionary = {'url': 'url', 'hash': 'hash', 'json': serializer.data}
            encoded_data = json.dumps(dictionary)
            r = requests.post('http://gservice.ca29983.tmweb.ru/gdocs/do.php',
                              headers={'Content-Type': 'application/json'},
                              data=encoded_data)
            if r.status_code == 200:
                reservation.is_synced = True
                reservation.save()
It might appear to be stuck because requests automatically retries the connection a few times. Try reducing the retry count to 0 or 1, as described in: Can I set max_retries for requests.request?
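For reference, a minimal sketch of what that looks like with a requests Session and HTTPAdapter; the timeout value is an assumption, and encoded_data and the URL are taken from the question:
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# Limit connection retries to at most one attempt
session.mount('http://', HTTPAdapter(max_retries=1))
session.mount('https://', HTTPAdapter(max_retries=1))

# A short timeout also makes an unreachable host fail fast
r = session.post('http://gservice.ca29983.tmweb.ru/gdocs/do.php',
                 headers={'Content-Type': 'application/json'},
                 data=encoded_data,
                 timeout=5)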

How to do HTTP long polling with Django Channels

I'm trying to implement HTTP long polling for a web request, but can't seem to find a suitable example in the Channels documentation; everything is about WebSockets.
What I need to do when consuming the HTTP message is either:
wait for a message on a Group that will be sent when a certain model is saved (using signals probably)
wait for a timeout, if no message is received
and then return something to the client.
Right now I have the code that can be seen in the examples:
def http_consumer(message):
    # Make standard HTTP response - access ASGI path attribute directly
    response = HttpResponse("Hello world! You asked for %s" % message.content['path'])
    # Encode that response into message format (ASGI)
    for chunk in AsgiHandler.encode_response(response):
        message.reply_channel.send(chunk)
So I have to return something from this http_consumer that indicates that I have nothing to send for now, but I can't block here. Maybe I can just not return anything? Then I have to catch the new message on a specific Group, or reach the timeout, and send the response to the client.
It seems that I will need to store the message.reply_channel somewhere so that I can respond later, but I'm at a loss as to how to:
catch the group message and generate the response
generate a response when no message was received (timeout); maybe the delay server can work here?
So, the way I ended up doing this is described below.
In the consumer, if I find that I have no immediate response to send, I store the message.reply_channel on a Group that will be notified in case of relevant events, and schedule a delayed message that will be triggered when the maximum time to wait is reached.
group_name = group_name_from_mac(mac_address)
Group(group_name).add(message.reply_channel)
message.channel_session['will_wait'] = True
delayed_message = {
    'channel': 'long_polling_terminator',
    'content': {'mac_address': mac_address,
                'reply_channel': message.reply_channel.name,
                'group_name': group_name},
    'delay': settings.LONG_POLLING_TIMEOUT
}
Channel('asgi.delay').send(delayed_message, immediately=True)
Then, two things can happen: either we get a message on the relevant Group and a response is sent early, or the delayed message arrives, signalling that we have exhausted the time we had to wait and must return a response indicating that there were no events.
In order to trigger the message when a relevant event occurs, I'm relying on Django signals:
class PortalConfig(AppConfig):
    name = 'portal'

    def ready(self):
        from .models import STBMessage
        post_save.connect(notify_new_message, sender=STBMessage)

def notify_new_message(sender, **kwargs):
    mac_address = kwargs['instance'].set_top_box.id
    layer = channel_layers['default']
    group_name = group_name_from_mac(mac_address)
    response = JsonResponse({'error': False, 'new_events': True})
    group = Group(group_name)
    for chunk in AsgiHandler.encode_response(response):
        group.send(chunk)
When the timeout expires, I get a message on the long_polling_terminator channel, and I need to send a message indicating that there are no events:
def long_polling_terminator(message):
    reply_channel = Channel(message['reply_channel'])
    group_name = message['group_name']
    mac_address = message['mac_address']
    layer = channel_layers['default']
    boxes = layer.group_channels(group_name)
    if message['reply_channel'] in boxes:
        response = JsonResponse({'error': False, 'new_events': False})
        write_http_response(response, reply_channel)
        return
The last thing to do is remove this reply_channel from the Group, which I do in an http.disconnect consumer:
def process_disconnect(message, group_name_from_mac):
    if message.channel_session.get('will_wait', False):
        reply_channel = Channel(message['reply_channel'])
        mac_address = message.channel_session['mac_address']
        group_name = group_name_from_mac(mac_address)
        Group(group_name).discard(reply_channel)

Reference function of Twisted Connection Bot Class

I am currently working on developing a Twitch.tv chat and moderation bot (the full code can be found on GitHub here: https://github.com/DarkElement75/tecsbot; it might not be fully updated to match the problem I describe). In doing this I need many different Twisted TCP connections to various channels. Because of the way Twitch's Whisper system works, I have one connection for sending/receiving whispers, and any connection to any channel needs to be able to reference this whisper connection and send data on it via TwitchWhisperBot's write() function. However, I have yet to find a method that allows my current global function to reference this write() function. Here is what I have right now:
#global functions
def send_whisper(self, user, msg):
    whisper_str = "/w %s %s" % (user, msg)
    print dir(whisper_bot)
    print dir(whisper_bot.transport)
    whisper_bot.write("asdf")

def whisper(self, user, msg):
    '''global whisper_user, whisper_msg
    if "/mods" in msg:
        thread.start_new_thread(get_whisper_mods_msg, (self, user, msg))
    else:
        whisper_user = user
        whisper_msg = msg'''
    if "/mods" in msg:
        thread.start_new_thread(get_whisper_mods_msg, (self, user, msg))
    else:
        send_whisper(self, user, msg)

#Example usage of these (inside a channel connection):
send_str = "Usage: !permit add <user> message/time/<time> <message count/time duration/time unit>/permanent"
whisper(self, user, send_str)
#Whisper classes
class TwitchWhisperBot(irc.IRCClient, object):
    def write(self, msg):
        self.msg(self.channel, msg.encode("utf-8"))
        logging.info("{}: {}".format(self.nickname, msg))

class WhisperBotFactory(protocol.ClientFactory, object):
    wait_time = 1

    def __init__(self, channel):
        global whisper_bot
        self.channel = channel
        whisper_bot = TwitchWhisperBot(self.channel)

    def buildProtocol(self, addr):
        return TwitchWhisperBot(self.channel)

    def clientConnectionLost(self, connector, reason):
        # Reconnect when disconnected
        logging.error("Lost connection, reconnecting")
        self.protocol = TwitchWhisperBot
        connector.connect()

    def clientConnectionFailed(self, connector, reason):
        # Keep retrying when connection fails
        msg = "Could not connect, retrying in {}s"
        logging.warning(msg.format(self.wait_time))
        time.sleep(self.wait_time)
        self.wait_time = min(512, self.wait_time * 2)
        connector.connect()
#Execution begins here:
#main whisper bot where other threads with processes will be started
#sets variables for connection to twitch chat
whisper_channel = '#_tecsbot_1444071429976'
whisper_channel_parsed = whisper_channel.replace("#", "")
server_json = get_json_servers()
server_arr = (server_json["servers"][0]).split(":")
server = server_arr[0]
port = int(server_arr[1])

#try:
# we are using this to make more connections, better than threading
# Make logging format prettier
logging.basicConfig(format="[%(asctime)s] %(message)s",
                    datefmt="%H:%M:%S",
                    level=logging.INFO)

# Connect to Twitch IRC server, make more instances for more connections
#Whisper connection
whisper_bot = ''
reactor.connectTCP(server, port, WhisperBotFactory(whisper_channel))
#Channel connections
reactor.connectTCP('irc.twitch.tv', 6667, BotFactory("#darkelement75"))
The simplest solution here would be using the Singleton pattern, since you're guaranteed to only have a single connection of each type at any given time. Personally, with Twisted, I think the simplest approach is to use the reactor to store your instance (since the reactor itself is a singleton).
So what you want to do is, inside TwitchWhisperBot, at sign in:
def signedOn(self):
    reactor.whisper_instance = self
And then, anywhere else in the code, you can access that instance:
whisper_bot = reactor.whisper_instance
Just for sanity's sake, you should also check whether it has been set:
if getattr(reactor, 'whisper_instance', None):
    reactor.whisper_instance.write("/w test_user message")
else:
    logging.warning("whisper instance not set")