AssertionError when reloading page - django

I am writing a human-detection live stream with Django + YOLOv5.
First I open the video source from RTSP, then run detection on it with the run() function, which yields frames one by one. To stream, I use StreamingHttpResponse with streaming_content=run().
It seems to work fine, but when I reload the streaming page, run() appears to be called again; if I reload too often, the FPS drops and the stream eventually stops with an AssertionError: cannot open rtsp...
I've tried one solution, using an iframe on the front-end, but every time the front-end shows the stream, it calls StreamingHttpResponse and run() again.
Do you have a solution for this?
def video_feed(request):
    return StreamingHttpResponse(streaming_content=run(), content_type='multipart/x-mixed-replace; boundary=frame')

Related

display live camera feed on a webpage with python

So I'm trying to display a preview of human detection done on raspberry pi on a webpage.
I already saw, and tried, this proposed solution. My issue with it is that the processing happens only while the page is being viewed, for obvious reasons. I want the processing to happen regardless of whether the preview is active, and when the page is viewed, for the preview to simply be attached "on top".
Having a separate thread for processing looks like a probable solution, but due to Flask's event-driven approach I'm struggling to figure out how to safely pass frames between threads (pre-processing takes a reasonable amount of time, and if I simply use locks to guard, it sometimes raises exceptions), and generally I'm not sure that's the best way to solve the problem.
Is multithreading the way to go? Or should I maybe choose some library other than Flask for this purpose?
From the example you posted, you can use the VideoCamera class but split get_frame into two functions: one that retrieves the frame and processes it (update_frame), and another that returns the latest frame in the encoding you need for Flask (get_frame). Simply run update_frame in a separate thread and that should work.
It's probably best practice to store the new frame in a local variable first, then use a lock to read/write the instance variable that holds the latest frame.
import threading

import cv2

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture(0)
        self._lock = threading.Lock()
        self._last_frame = None

    def __del__(self):
        self.video.release()

    def update_frame(self):
        ret, frame = self.video.read()
        # DO WHAT YOU WANT WITH TENSORFLOW / KERAS AND OPENCV
        with self._lock:  # mutual exclusion while writing
            self._last_frame = frame

    def get_frame(self):
        with self._lock:  # mutual exclusion while reading
            frame = self._last_frame
        ret, jpeg = cv2.imencode('.jpg', frame)
        return jpeg.tobytes()
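The "separate thread" part of the answer could be wired up as below. This is a minimal sketch, not the only way to do it; the camera object is assumed to expose update_frame() as described above, and the polling interval is an arbitrary choice:

```python
import threading
import time

def start_camera_loop(camera, interval=0.03, stop_event=None):
    """Call camera.update_frame() in a background thread so frames are
    processed even when no page is being viewed."""
    stop_event = stop_event or threading.Event()

    def loop():
        while not stop_event.is_set():
            camera.update_frame()
            time.sleep(interval)

    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return stop_event  # call .set() on it to stop the loop
```

A Flask streaming route would then just yield camera.get_frame(); the heavy processing keeps running regardless of whether any viewer is connected.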

wait for an async_task to complete or complete it in background

I have some functions in my Django application that take a lot of time (scraping using proxies); they sometimes take more than 30 seconds and are killed by gunicorn and the AWS server due to a timeout, and I don't want to increase the timeout value.
A solution that came to mind is to run these functions as an async_task using the django-q module.
Is it possible to do the following:
When a view calls the long-running function, run the function asynchronously.
If it returns a result within a pre-defined amount of time, return the result to the user.
If not, return an incomplete result and continue the function in the background (the function changes a model in the database); there is no need to notify the user of the changes.

Custom Media Foundation sink never receives samples

I have my own MediaSink in Windows Media Foundation with one stream. In the OnClockStart method, I instruct the stream to queue (i) MEStreamStarted and (ii) MEStreamSinkRequestSample on itself. For implementing the queue, I use the IMFMediaEventQueue, and using the mtrace tool, I can also see that someone dequeues the event.
The problem is that ProcessSample of my stream is actually never called. This also has the effect that no further samples are requested, because this is done after processing a sample like in https://github.com/Microsoft/Windows-classic-samples/tree/master/Samples/DX11VideoRenderer.
Is the described approach the right way to implement the sink? If not, what would be the right way? If so, where could I search for the problem?
Some background info: The sink is an RTSP sink based on live555. Since the latter is also sink-driven, I thought it would be a good idea queuing a MEStreamSinkRequestSample whenever live555 requests more data from me. This is working as intended.
However, this solution has the problem that new samples are only requested as long as a client is connected to live555. If I now add a tee before the sink, e.g. to show a local preview, the system gets out of control, because the tee accumulates samples on the output of my sink which are never fetched. I then started playing around with discardable samples (cf. https://social.msdn.microsoft.com/Forums/sharepoint/en-US/5065a7cd-3c63-43e8-8f70-be777c89b38e/mixing-rate-sink-and-rateless-sink-on-a-tee-node?forum=mediafoundationdevelopment), but the problem is that either the stream does not start, queues grow, or the frame rate of the faster sink is artificially limited, depending on which side is discardable.
Therefore, the next idea was rewriting my sink such that it always requests a new sample when it has processed the current one and puts all samples in a ring buffer for live555 such that whenever clients are connected, they can retrieve their data from there, and otherwise, the samples are just discarded. This does not work at all. Now, my sink does not get anything even without the tee.
The observation is: if I just request a lot of samples (as in the original approach), at some point I get data. However, if I request only one (I also tried moderately larger numbers, up to 5), ProcessSample is simply not called, so no subsequent requests can be generated. I send MEStreamStarted once the clock is started or restarted, exactly as described at https://msdn.microsoft.com/en-us/library/windows/desktop/ms701626, and after that I request the first sample. In my understanding, MEStreamSinkRequestSample should not get lost, so I should get something even from a single request. Is that a misunderstanding? Should I keep requesting until I get something?

GetLastInputInfo does not work correctly?

I used GetLastInputInfo to check the last input time from the mouse and keyboard.
It works correctly on my desktop PC, but when I run my program on my laptop it does not work.
I see that LASTINPUTINFO changes every 10-15 seconds.
I am now writing an example program to watch all input from the mouse and keyboard and save the last input time from each device, but this time does not change while I am idle.
How can I check what (device or program) is generating the activity and changing the LASTINPUTINFO struct?
You can use Raw Input to see if the activity is coming from the actual mouse/keyboard itself. If it is, you might have a faulty device driver, or a driver that is running some kind of internal timer to generate a steady flow of input events.
If GetLastInputInfo() updates without Raw Input activity being reported, then a running app is most likely using an input-injection API like mouse_event(), keybd_event(), or SendInput(). You would have to hook those directly to find out which app is calling them.

How do I detect an aborted connection in Django?

I have a Django view that does some pretty heavy processing and takes around 20-30 seconds to return a result.
Sometimes the user will end up closing the browser window (terminating the connection) before the request completes -- in that case, I'd like to be able to detect this and stop working. The work I do is read-only on the database so there isn't any issue with transactions.
In PHP the connection_aborted function does exactly this. Is this functionality available in Django?
Here's example code I'd like to write:
def myview(request):
    while not connection_aborted():
        # do another bit of work...
        if work_complete:
            return HttpResponse('results go here')
Thanks.
I don't think Django provides it because it basically can't. More than Django itself, this depends on the way Django interfaces with your web server. All this depends on your software stack (which you have not specified). I don't think it's even part of the FastCGI and WSGI protocols!
Edit: I'm also pretty sure that Django does not start sending any data to the client until your view finishes execution, so it can't possibly know if the connection is dead. The underlying socket won't trigger an error unless the server tries to send some data back to the user.
That connection_aborted function in PHP doesn't do what you think it does. It will tell you if the client disconnected, but only if the buffer has been flushed, i.e. some sort of response has been sent from the server back to the client. The PHP version wouldn't even work as you've written it above; you'd have to add a call to something like flush within your loop to have the server attempt to send data.
HTTP is a stateless protocol. It's designed so that neither the client nor the server depends on the other. As a result, the state of either side is only known while a connection exists, and that only occurs when there's some data to send one way or the other.
Your best bet is to do as @MattH suggested and handle this with a bit of AJAX; if you'd like, you can integrate something like Node.js to make client "check-ins" during processing. How to set that up properly is beyond my area of expertise, though.
So you have an AJAX view, requested in the background of a rendered page, that runs a query taking 20-30 seconds to process, and you're concerned about wasted resources when someone cancels the page load.
I see that you've got options in three broad categories:
Live with it. Improve the situation by caching the results in case the user comes back.
Make it faster. Throw more space at a time/space trade-off. Maintain intermediate tables. Precalculate the entire thing, etc.
Do something clever with the browser fast-polling a "is it ready yet?" query and the server cancelling the query if it doesn't receive a nag within interval * 2 or similar. If you're really clever, you could return progress / ETA to the nags. However, this might not have particularly useful behaviour when the system is under load or your site is being accessed over limited bandwidth.
I don't think you should go for option 3 because it's increasing complexity and resource usage for not much gain.