gevent-socketio: browser receiving another browser's response - django

I've been trying this for a couple of days now but I can't seem to wrap my head around it. My problem is essentially that if I have 2 browsers making requests at the same time, my server-side socketio response returns the wrong results to the wrong browser (the results get swapped). I think the root of my problem is that I don't know how socket.io determines which browser to return the results to. My current code has lots of moving parts and it would be a pain to strip it down to something people can find meaningful, so instead I think I will be able to resolve my bug if someone can help me work through and understand the django_chat example found at https://github.com/abourget/gevent-socketio/tree/master/examples/django_chat. So here goes:
So sequentially, when a user enters something into chat, this code fires
$('#send-message').submit(function () {
    message('me', $('#message').val());
    socket.emit('user message', $('#message').val());
    clear();
    $('#lines').get(0).scrollTop = 10000000;
    return false;
});
The socket.emit function then triggers this function in the ChatNameSpace class:
def on_user_message(self, msg):
    self.log('User message: {0}'.format(msg))
    self.emit_to_room(self.room, 'msg_to_room',
                      self.socket.session['nickname'], msg)
    return True
This in turn calls the emit_to_room function found in the RoomsMixin class:
def emit_to_room(self, room, event, *args):
    """This is sent to all in the room (in this particular Namespace)"""
    pkt = dict(type="event",
               name=event,
               args=args,
               endpoint=self.ns_name)
    room_name = self._get_room_name(room)
    for sessid, socket in self.socket.server.sockets.iteritems():
        if 'rooms' not in socket.session:
            continue
        if room_name in socket.session['rooms'] and self.socket != socket:
            socket.send_packet(pkt)
I understand that when a user joins a chat room, the socket's session['rooms'] entry gets updated with the chatrooms he belongs to. It looks something like ['/chat_1', '/chat_2'], where the numbers are the primary keys of the room objects.
This is where I get lost. Where does this distinction between specific chatrooms meet the frontend js code? How does the emit function know which room to send the response to?
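For reference, a sketch of the piece that connects the two, based on the RoomsMixin API in gevent-socketio (the exact handler in the django_chat example may differ slightly):

# Hedged sketch, not the example's verbatim code: RoomsMixin provides
# join(), which records the prefixed room name in the socket's session.
def on_join(self, room):
    self.room = room
    self.join(room)  # adds e.g. '/chat_1' to self.socket.session['rooms']
    return True

The key point is that the emit never targets a particular browser directly: emit_to_room walks every connected socket on the server and sends the packet only to those whose session['rooms'] contains the room name.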

Related

How to start a new request after the item_scraped scrapy signal is called?

I need to scrape the data of each item from a website using Scrapy (http://example.com/itemview). I have a list of itemIDs and I need to pass each one into a form on example.com.
There is no URL change for each item, so for each request in my spider the URL will always be the same, but the content will be different.
I don't want a for loop for handling each request, so I followed the steps below:
started the spider with the above URL
added the item_scraped and spider_closed signals
passed through several functions
passed the scraped data to the pipeline
triggered the item_scraped signal
After this it automatically calls the spider_closed signal, but I want the above steps to continue until the itemIDs run out.
import scrapy
from scrapy import signals
from scrapy.http import Request
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
# plus the project's ExamplecrawlerItem import

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["example.com"]
    itemIDs = [11111, 22222, 33333]
    current_item_num = 0

    def __init__(self, itemids=None, *args, **kwargs):
        super(ExampleSpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.item_scraped, signals.item_scraped)
        dispatcher.connect(self.spider_closed, signals.spider_closed)

    def spider_closed(self, spider):
        self.driver.quit()

    def start_requests(self):
        request = self.make_requests_from_url('http://example.com/itemview')
        yield request

    def parse(self, response):
        self.driver = webdriver.PhantomJS()
        self.driver.get(response.url)
        first_data = self.driver.find_element_by_xpath('//div[@id="itemview"]').text.strip()
        yield Request(response.url, meta={'first_data': first_data},
                      callback=self.processDetails, dont_filter=True)

    def processDetails(self, response):
        itemID = self.itemIDs[self.current_item_num]
        # ...form submission with the current itemID goes here...
        # ...the content of the page is updated with the given itemID...
        yield Request(response.url, meta={'first_data': response.meta['first_data']},
                      callback=self.processData, dont_filter=True)

    def processData(self, response):
        # ...some more scraping goes here...
        item = ExamplecrawlerItem()
        item['first_data'] = response.meta['first_data']
        yield item

    def item_scraped(self, item, response, spider):
        self.current_item_num += 1
        # I need to call the processDetails function here for the next itemID
        # and the process needs to continue until the itemIDs finish
        self.parse(response)
My pipeline:
class ExampleDBPipeline(object):
    def process_item(self, item, spider):
        MYCOLLECTION.insert(dict(item))
        return item
I wish I had an elegant solution to this, but instead here is a hackish way of calling the underlying classes:
self.crawler.engine.slot.scheduler.enqueue_request(scrapy.Request(url,self.yourCallBack))
However, you can yield a request after you yield the item and have it call back to self.processDetails. Simply add this to your processData function:
yield item
self.counter += 1
yield scrapy.Request(response.url, callback=self.processDetails, dont_filter=True, meta={"your": "Dictionary"})
Also, PhantomJS can be nice and make your life easy, but it is slower than regular connections. If possible, find the request for the JSON data or whatever it is that makes the page unparseable without JS. To do so, open up Chrome, right click, click Inspect, go to the Network tab, then enter the ID into the form and look at the XHR or JS tabs for a JSON response that has the data or the next URL you want. Most of the time there will be some URL made by appending the ID; if you can find it, you can just concatenate your URLs and call that directly without paying the cost of JS rendering. Sometimes it is randomized, or not there, but I've had fair success with it.
You can then also use that to yield many requests at the same time without having to worry about PhantomJS trying to do two things at once, or having to initialize many instances of it. You could use tabs, but that is a pain.
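For illustration, a minimal sketch of what that looks like once such an endpoint is found. The URL pattern is an assumption, not example.com's real API, and it bypasses the first_data / PhantomJS steps entirely:

# Hedged sketch: hit the hypothetical data endpoint directly, one request
# per ID, and let Scrapy schedule them concurrently.
def start_requests(self):
    for item_id in self.itemIDs:
        url = 'http://example.com/itemview/data?itemID=%d' % item_id  # assumed pattern
        yield scrapy.Request(url, callback=self.processData, dont_filter=True)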
Also, I would use a Queue of your IDs to ensure thread safety. Otherwise, you could have processDetails called twice on the same ID. Besides, in the logic of your program everything seems to go linearly, which means you aren't using the concurrency capabilities of Scrapy and your program will go more slowly. To use a Queue, add:
import Queue

# go inside the class definition and add
itemIDQueue = Queue.Queue()

# within __init__ add
[self.itemIDQueue.put(ID) for ID in self.itemIDs]

# within processDetails, replace itemID = self.itemIDs[self.current_item_num] with
itemID = self.itemIDQueue.get()
And then there is no need to increment the counter, and your program is thread safe.
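Putting the two suggestions together, processData could look roughly like this; it is a sketch that assumes the itemIDQueue set up above and the spider's existing names:

def processData(self, response):
    # ...some more scraping goes here...
    item = ExamplecrawlerItem()
    item['first_data'] = response.meta['first_data']
    yield item
    if not self.itemIDQueue.empty():
        # chain straight into the next ID; processDetails pulls it
        # from the queue itself
        yield scrapy.Request(response.url, callback=self.processDetails,
                             dont_filter=True)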

Custom Django signal receiver getting data

I'm very new to programming and especially to Django, but I can't work out how to use any previous answers to my advantage....
Apologies if my question is too vague, but essentially I have two different apps, let's call them AppA and AppB, with data in two different databases, but the apps contain information on the same individual item.
I want to edit this information on my 'edit details' page while keeping the apps as separate as possible (well, AppB can know about functions in AppA but not vice versa)... I guess what I really want is a signal which works like so:
A 'submit' view within AppA which is called when I submit changes to the data (using text boxes). The data for AppA is then saved.
And a signal is then sent to AppB which would ideally update its data, before the HttpResponseRedirect is performed.
Unfortunately I can't really get this to work. My problem is that if I put 'request' into the arguments for save_details, I get errors like "save_details() takes exactly 3 arguments (2 given)"... does anyone know a clever way of getting something like this to work?
My submit function in AppA looks something like this...
def submit(self, request, id):
    signal_received.send(sender=self, id=id)
    q = get_object_or_404(AppA, pk=id)
    q.blah = request.POST.get('wibble from the form')
    ...
    return Http.....
In my AppB signals.py file, I have put:
signal_received = django.dispatch.Signal(providing_args=['id'])

def save_details(sender, uid, **kwargs):
    p = AppB.objects.get(id=id)
    p.wobble = request.POST.get('wobble from the form')
    ...

signal_received.connect(save_details)
Obviously the above doesn't mention request in its arguments, which seems to be necessary, but if I add it, I get problems with the number of arguments.
(I have imported all the right things at the top of each file, I think... hence me leaving that off.)
Any pointers about the above would be appreciated... e.g. does "request" need to be the first argument? It didn't seem to like me using "self" before, but I have tried to copy as much as possible the example at the bottom of the documentation (https://docs.djangoproject.com/en/dev/topics/signals/), and the extra functionality I need in the signal receiving function is flummoxing me.
Thanks in advance...
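For reference, a minimal sketch of the usual way around this: signal receivers must accept (sender, **kwargs), so rather than passing the request object itself, send the POSTed values as keyword arguments. Names mirror the snippets above; anything else is an assumption:

# signals.py in AppB (a hedged sketch, not a drop-in fix)
import django.dispatch

signal_received = django.dispatch.Signal(providing_args=['id', 'wobble'])

def save_details(sender, id, wobble, **kwargs):
    p = AppB.objects.get(id=id)
    p.wobble = wobble
    p.save()

signal_received.connect(save_details)

# views.py in AppA: a plain view function takes request first, with no self
def submit(request, id):
    q = get_object_or_404(AppA, pk=id)
    q.blah = request.POST.get('wibble from the form')
    q.save()
    signal_received.send(sender=None, id=id,
                         wobble=request.POST.get('wobble from the form'))
    return HttpResponseRedirect('...')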

Getting information from Django custom signal receiver

This is my second question today actually, but what I want to know is: is it possible to retrieve information from a signal handler?
I have a list of items, call it list, and each item is in AppA. Each item has a couple of characteristics which are saved in a different app, AppB.
So I figured that I could maybe create a dictionary, dict, and iterate over the items in list. In each iteration, I was hoping to send a signal to AppB and retrieve the information, i.e. have something like:
def blob(request):
    dict = {}
    for item in list:
        signal.send(sender=None, id=item.id)
        dict[item] = (char1, char2)
    # ...some html request
My signal handler looks something like this:
def handler(sender, id, **kwargs):
    model2 = Model2.objects.get(id=id)
    a = model2.char1
    b = model2.char2
    return (a, b)
Then I was hoping to be able to just produce a list of the items and their characteristics on the webpage... The problem is that obviously the signal sender has to send the signal and get back the information which I want.... is that even possible :S?
Currently, I get an error saying "global name 'char1' is not defined"... and I have imported the handlers and signals into the views.py where blob resides.... so is my problem just unsolvable? Should it clearly be solved in another way? Or have I almost certainly made a stupid error with importing stuff?
This wasn't actually so tricky. I thought I should perhaps post how it was solved. In my view, I actually wrote:
response_list = signal.send(sender=None, list=list_of_items)
I then iterated over my response_list, adding the items to a fresh list like so:
snippets = []
for response in response_list:
    logger.error(response)
    snippets.append(response[1])
And I could then use the responses in snippets like a dictionary in my template. When I asked the question, I didn't appreciate that I could assign the result of sending the signal to something...
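For reference, a minimal sketch of the whole round trip; the signal name is a placeholder. Signal.send returns a list of (receiver, return_value) pairs, which is what the response[1] indexing above relies on:

import django.dispatch

info_signal = django.dispatch.Signal(providing_args=['id'])  # hypothetical name

def handler(sender, id, **kwargs):
    model2 = Model2.objects.get(id=id)
    return (model2.char1, model2.char2)

info_signal.connect(handler)

# in the view:
responses = info_signal.send(sender=None, id=item.id)  # [(handler, (char1, char2))]
char1, char2 = responses[0][1]  # unpack the first receiver's return value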

In Django, how to find out if a request has been canceled?

I have a view in Django which streams a response. (Think of a web-based chat circa 1999, or the comet technique.)
import time
from django.http import HttpResponse

def events(request):
    def generate_events():
        for i in range(10):
            time.sleep(2)
            yield " " * 1024
            yield "This is some text.\n"
    return HttpResponse(generate_events())
Now, I'd like to detect when the user cancels the loading of the page, since there is no point in sending more data. Ideally, there would be something like:
if not request.is_alive():
    return
Is there a way to achieve this in Django?
I really don't think you can do that from the server side. But I'm sure you could use JavaScript to get some decent results. The "stream" will die when the JS stops asking for stuff from the server, as in the case of a cancelled request.

Django - show loading message during long processing

How can I show a 'please wait' loading message from a Django view?
I have a Django view that takes significant time to perform calculations on a large dataset.
While the process runs, I would like to present the user with a feedback message, e.g. a spinning loading animated gif or similar.
After trying the two different approaches suggested by Brandon and Murat, Brandon's suggestion proved the most successful:
1. Create a wrapper template that includes the javascript from http://djangosnippets.org/snippets/679/. The javascript has been modified: (i) to work without a form (ii) to hide the progress bar / display results when a 'done' flag is returned (iii) with the JSON update url pointing to the view described below.
2. Move the slow-loading function to a thread. This thread will be passed a cache key and will be responsible for updating the cache with progress status and then with its results. The thread renders the original template as a string and saves it to the cache.
3. Create a view based on upload_progress from http://djangosnippets.org/snippets/678/, modified to (i) instead render the original wrapper template if progress_id='' (ii) generate the cache_key, check if a cache entry already exists and if not start a new thread (iii) monitor the progress of the thread and, when done, pass the results to the wrapper template.
4. The wrapper template displays the results via document.getElementById('main').innerHTML=data.result
(* looking at whether step 4 might be better implemented via a redirect, as the rendered template contains javascript that is not currently run by document.getElementById('main').innerHTML=data.result)
Another thing you could do is add a javascript function that displays a loading image before it actually calls the Django View.
function showLoaderOnClick(url) {
    showLoader();
    window.location = url;
}

function showLoader() {
    $('body').append('<div style="" id="loadingDiv"><div class="loader">Loading...</div></div>');
}
And then in your template you can hook it up to the link that triggers the slow view, e.g. an anchor whose onclick calls showLoaderOnClick(this.href), with link text along the lines of:
This will take some time...
Here's a quick default loadingDiv : https://stackoverflow.com/a/41730965/13476073
Note that this requires jQuery.
A more straightforward approach is to generate a wait page with your gif etc. and then use the javascript
window.location.href = 'insert results view here';
to switch to the results view, which starts your lengthy calculation. The page won't change until the calculation is finished; when it finishes, the results page will be rendered.
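A minimal sketch of that wait-then-redirect arrangement (the view names, template names, and run_lengthy_calculation are assumptions):

from django.shortcuts import render

def wait(request):
    # wait.html shows the gif and immediately runs:
    #   window.location.href = '/results/';
    return render(request, 'wait.html')

def results(request):
    # the browser keeps showing wait.html until this response arrives
    data = run_lengthy_calculation()  # hypothetical slow function
    return render(request, 'results.html', {'data': data})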
Here's an oldie, but might get you going in the right direction: http://djangosnippets.org/snippets/679/
A workaround that I chose was to use the beforeunload and unload events to show the loading image. This can be used with or without window.onload. In my case, it's the view that is taking a great amount of time, not the page load, hence I am not using window.onload (because by the time window.onload comes into the picture a lot of time has already passed, and at that point I no longer need the loading icon to be shown).
The downside is that there is a false message to the user that the page is loading even when the request has not even reached the server, or when it's taking much time. Also, it doesn't work for requests coming from outside my website. But I'm living with this for now.
Update: Sorry for not adding a code snippet earlier, thanks @blockhead. The following is a quick and dirty mix of plain JS and jQuery that I have in the master template.
Update 2: I later moved to making my view(s) lightweight so they send the crucial part of the page quickly, and then use AJAX to get the remaining content while showing the loading icon. It needed quite some work, but the end result is worth it.
window.onload = function () {
    $("#load-icon").hide(); // I needed the loading icon to hide once the page loads
};

var onBeforeUnLoadEvent = false;
window.onunload = window.onbeforeunload = function () {
    if (!onBeforeUnLoadEvent) { // for avoiding dual calls in browsers that support both events
        onBeforeUnLoadEvent = true;
        $("#load-icon").show();
        setTimeout(function () {
            $("#load-icon").hide(); // hiding the loading icon in any case after
        }, 5000);                   // 5 seconds (remove if you do not want it)
    }
};
P.S. I cannot comment yet hence posted this as an answer.
Iterating HttpResponse
https://stackoverflow.com/a/1371061/198062
Edit:
I found an example of sending big files with django: http://djangosnippets.org/snippets/365/. Then I looked at the FileWrapper class (django.core.servers.basehttp):
class FileWrapper(object):
    """Wrapper to convert file-like objects to iterables"""

    def __init__(self, filelike, blksize=8192):
        self.filelike = filelike
        self.blksize = blksize
        if hasattr(filelike, 'close'):
            self.close = filelike.close

    def __getitem__(self, key):
        data = self.filelike.read(self.blksize)
        if data:
            return data
        raise IndexError

    def __iter__(self):
        return self

    def next(self):
        data = self.filelike.read(self.blksize)
        if data:
            return data
        raise StopIteration
I think we can make an iterable class like this:
class FlushContent(object):
    def __init__(self):
        # some initialization code
        pass

    def __getitem__(self, key):
        # send a part of html
        pass

    def __iter__(self):
        return self

    def next(self):
        # do some work
        # return some html code
        if finished:
            raise StopIteration
Then in views.py:

def long_work(request):
    flushcontent = FlushContent()
    return HttpResponse(flushcontent)
Edit:
Example code, still not working:
class FlushContent(object):
    def __init__(self):
        self.stop_index = 2
        self.index = 0

    def __getitem__(self, key):
        pass

    def __iter__(self):
        return self

    def next(self):
        if self.index == 0:
            html = "loading"
        elif self.index == 1:
            import time
            time.sleep(5)
            html = "finished loading"
        self.index += 1
        if self.index > self.stop_index:
            raise StopIteration
        return html
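For completeness: Django 1.5 added StreamingHttpResponse, which consumes its iterator lazily instead of joining the content up front, and is the usual fix for this kind of buffering. A minimal sketch:

import time
from django.http import StreamingHttpResponse

def long_work(request):
    def content():
        yield "loading"
        time.sleep(5)  # stand-in for the real work
        yield "finished loading"
    return StreamingHttpResponse(content())

Note that the WSGI server or a proxy in front of Django can still buffer the output, so chunks are not guaranteed to reach the browser immediately.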
Here is another explanation of how to get a loading message for long-loading Django views.
Views that do a lot of processing (e.g. complex queries with many objects, accessing 3rd-party APIs) can take quite some time before the page is loaded and shown to the user in the browser. What happens is that all that processing is done on the server, and Django is not able to serve the page before it is completed.
The only way to show a loading message (e.g. a spinner gif) during the processing is to break up the current view into two views (see the sketch after these steps):
The first view renders the page with no processing and with the loading message.
The page includes an AJAX call to the 2nd view that does the actual processing. The result of the processing is displayed on the page once it is done, via AJAX / JavaScript.
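A minimal sketch of that split, assuming Django views; the template name and run_expensive_queries are placeholders, not anything from the original answer:

import json
from django.http import HttpResponse
from django.shortcuts import render

def page(request):
    # 1st view: returns immediately; loading_page.html shows the spinner
    # and fires an AJAX GET at the processing view below
    return render(request, 'loading_page.html')

def processing(request):
    # 2nd view: does the slow work and returns JSON for the page's JS
    result = run_expensive_queries()  # hypothetical slow helper
    return HttpResponse(json.dumps({'result': result}),
                        content_type='application/json')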