Can a multipart/x-mixed-replace stream be played in a WebView inside JavaFX? - flask

I am using a Flask server to stream video from a webcam to a Java client. Here is the Flask implementation:
import cv2
import threading
from flask import Flask, Response

app = Flask(__name__)
outputFrame = None          # populated by the capture thread elsewhere in the script
lock = threading.Lock()

def generate():
    # grab global references to the output frame and lock variables
    global outputFrame, lock
    # loop over frames from the output stream
    while True:
        # wait until the lock is acquired
        with lock:
            # check if the output frame is available, otherwise skip
            # this iteration of the loop
            if outputFrame is None:
                continue
            # encode the frame in JPEG format
            (flag, encodedImage) = cv2.imencode(".jpg", outputFrame)
            # ensure the frame was successfully encoded
            if not flag:
                continue
        # yield the output frame in the byte format
        yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
               bytearray(encodedImage) + b'\r\n')

@app.route("/video_feed")
def video_feed():
    # return the response generated along with the specific media
    # type (mime type)
    return Response(generate(),
                    mimetype="multipart/x-mixed-replace; boundary=frame")
I have a JavaFX WebView screen, but it doesn't show anything. Is it because WebView does not support multipart/x-mixed-replace? What's the solution for this? Is there any other option to get a Python video stream onto a JavaFX application?

Related

Special character encoding added - PDF Django

I have a function to create a simple PDF, but when the text contains special characters, the output comes out garbled. How do I correctly save characters such as śćźż in my PDF file?
I tried changing the font using setFont (Helvetica, TimesRoman) according to this doc, but I was not able to get the expected results.
views.py (based on the official doc):
import io
from django.http import FileResponse
from reportlab.pdfgen import canvas

def some_view_aa(request):
    # Create a file-like buffer to receive PDF data.
    buffer = io.BytesIO()
    # Create the PDF object, using the buffer as its "file."
    p = canvas.Canvas(buffer)
    # Draw things on the PDF. Here's where the PDF generation happens.
    # See the ReportLab documentation for the full list of functionality.
    p.drawString(100, 100, "Hello AZX AĄĄŻĄ world.")
    # Close the PDF object cleanly, and we're done.
    p.showPage()
    p.save()
    # FileResponse sets the Content-Disposition header so that browsers
    # present the option to save the file.
    buffer.seek(0)
    return FileResponse(buffer, as_attachment=True, filename='hello.pdf')
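The built-in PDF Type 1 fonts (Helvetica, Times-Roman) only cover Latin-1, which is why glyphs like śćźż come out wrong. A minimal sketch of the usual fix, assuming a Unicode TTF such as DejaVu Sans is available (the font file path here is an assumption): register the font with ReportLab and select it before drawing.

from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont

# Register a TTF font that contains the Polish glyphs; the path is an
# assumption - point it at any Unicode font you ship with the project.
pdfmetrics.registerFont(TTFont("DejaVuSans", "DejaVuSans.ttf"))

# Then, inside the view, before drawString:
p.setFont("DejaVuSans", 12)
p.drawString(100, 100, "śćźż renders correctly with a registered Unicode font")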

Flask: change content size

I have a Flask route which returns a video feed. I would like to be able to change the video frame size. How can I do this?
import logging
from flask import Flask, Response

app = Flask(__name__)

def gen(stream):
    while True:
        try:
            frame = stream.get_last()
            if frame is not None:
                yield (b'--frame\r\n'
                       b'Pragma-directive: no-cache\r\n'
                       b'Cache-directive: no-cache\r\n'
                       b'Cache-control: no-cache\r\n'
                       b'Pragma: no-cache\r\n'
                       b'Expires: 0\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')
        except Exception:
            # Log unexpected exceptions.
            logging.error("Error occurred", exc_info=True)

@app.route('/video')
def video_feed():
    return Response(gen(RedisImageStream(conn, args)),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
If you only need to change the displayed size of the image, you may be able to edit the code that displays it. If this is a website, a little CSS on the element that shows the stream is enough.
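For example, a one-line sketch (assuming the feed is embedded with an <img> tag, as in the other snippets on this page); the browser scales the frames and the server keeps sending them unchanged:

<img src="/video" style="width: 400px; height: auto;">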
If you really need to change the size of the images you send out from your server, you will need to load each image into memory, apply the resize you want, and re-encode it as JPEG. This is computationally expensive and is one of the main sources of latency in video streaming; in fact, the main reason streaming services such as YouTube and Twitch are expensive to run is that they must re-encode the incoming video into many resolutions and send it out in real time.
For your case of Python and JPEG images, you can use PIL / Pillow. Here's an example:
import io
from PIL import Image

def downscale(image, size):
    '''
    Accept a JPEG binary representation of an image,
    and return the JPEG of a smaller version of the image
    that has the same aspect ratio and is not larger than size.
    '''
    fp = io.BytesIO(image)  # create a file-like object from the supplied buffer
    im = Image.open(fp)
    # Image.thumbnail shrinks the image in place so that it is no larger
    # than size; it returns None, so don't assign its result.
    # If this is not what you want, take a look at Image.transform.
    im.thumbnail(size)
    outp = io.BytesIO()  # create an empty buffer for the output
    im.save(outp, "JPEG")
    bytestring = outp.getvalue()
    return bytestring
Then, before your yield line, call:
frame = downscale(frame, (400, 300))

How to stream video to more than one client using Django development server

I am trying to stream video to multiple browsers using OpenCV and Django on a Raspberry Pi. With the code I share below, I am able to see my video stream on a different computer, which is great, but when another computer tries to access the stream it just gets a blank screen. If I close the stream on the first computer, the second computer is then able to access it.
So my question is: is this due to my code, or is this a limitation of the Django development server, meaning I need to use gunicorn/nginx or a similar production-level setup?
I am hoping I am just missing something in the code...
# views.py
import cv2
from django.http import StreamingHttpResponse, HttpResponseServerError
from django.views.decorators import gzip

class VideoCamera(object):
    def __init__(self):
        self.video = cv2.VideoCapture(0)

    def __del__(self):
        self.video.release()

    def get_frame(self):
        ret, image = self.video.read()
        ret, jpeg = cv2.imencode('.jpg', image)
        return jpeg.tobytes()

def gen(camera):
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')

@gzip.gzip_page
def videoStream(request):
    try:
        return StreamingHttpResponse(gen(VideoCamera()),
                                     content_type="multipart/x-mixed-replace;boundary=frame")
    except HttpResponseServerError:
        print("aborted")
Then my HTML is very simple for now:
<img id="image" src = "http://127.0.0.0:8000/videostream/">
If I remember correctly, you can't capture one camera twice: the second request probably fails because the camera device is already captured by the first.
You may try creating a second process that captures the video into some buffer such as Redis, and have the Django views read frames from it. Something like in this answer.
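A minimal sketch of that split (the Redis key, host, and function names are assumptions, not from the original code): a standalone process grabs frames and keeps only the newest JPEG in Redis, and the Django view streams whatever is currently there, so any number of clients can connect without fighting over the capture device.

# capture_process.py - run this once, alongside the Django server
import cv2
import redis

r = redis.Redis(host="localhost", port=6379)
cap = cv2.VideoCapture(0)
while True:
    ret, image = cap.read()
    if not ret:
        continue
    ret, jpeg = cv2.imencode(".jpg", image)
    if ret:
        r.set("latest_frame", jpeg.tobytes())  # overwrite with the newest frame

# views.py - generator that reads from Redis instead of the camera
import redis

def gen_from_redis():
    r = redis.Redis(host="localhost", port=6379)
    while True:
        frame = r.get("latest_frame")
        if frame is None:
            continue  # the capture process has not stored a frame yet
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')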

Streaming audio to DialogFlow for real-time intent recognition

I'm trying to stream audio from a (Pepper robot) microphone to DialogFlow. I have working code for sending a block of audio. When I send the request, the response contains the message "None Exception iterating requests!". I've seen this error previously when I was reading from an audio file. However, I fail to see what's wrong with the data I'm passing now.
processRemote is called whenever the microphone records something. When writing the sound_data[0].tostring() to a StringIO and later retrieving it in chunks of 4096 bytes, the solution works.
self.processing_queue is supposed to hold a few chunks of audio that should be processed before working on new audio.
The error occurs in the response for self.session_client.streaming_detect_intent(requests).
I'm thankful for any ideas.
import numpy as np
import dialogflow

# The following are methods of the robot's audio-handling class (excerpt).

def processRemote(self, nbOfChannels, nbOfSamplesByChannel, timeStamp, inputBuffer):
    """audio stream callback method with simple silence detection"""
    sound_data_interlaced = np.fromstring(str(inputBuffer), dtype=np.int16)
    sound_data = np.reshape(sound_data_interlaced,
                            (nbOfChannels, nbOfSamplesByChannel), 'F')
    peak_value = np.max(sound_data)
    chunk = sound_data[0].tostring()
    self.processing_queue.append(chunk)
    if self.is_active:
        # detect sound
        if peak_value > 6000:
            print("Peak:", peak_value)
            if not self.recordingInProgress:
                self.startRecording()
        # if recording is in progress we send directly to Google
        try:
            if self.recordingInProgress:
                print("preparing request proc remote")
                requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
                print("should send now")
                responses = self.session_client.streaming_detect_intent(requests)
                for response in responses:
                    print("checking response")
                    if len(response.fulfillment_text) != 0:
                        print("response not empty")
                        self.stopRecording(response)  # stop if we already know the intent
        except Exception as e:
            print(e)

def startRecording(self):
    """init an in-memory file object and save the last raw sound buffer to it."""
    # session path setup
    self.session_path = self.session_client.session_path(DIALOG_FLOW_GCP_PROJECT_ID, self.uuid)
    self.recordingInProgress = True
    requests = list()
    # set up streaming
    print("start streaming")
    q_input = dialogflow.types.QueryInput(audio_config=self.audio_config)
    req = dialogflow.types.StreamingDetectIntentRequest(
        session=self.session_path, query_input=q_input)
    requests.append(req)
    # process pre-recorded audio
    print("work on stored audio")
    for chunk in self.processing_queue:
        print("appending chunk")
        try:
            requests.append(dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk))
        except Exception as e:
            print(e)
    print("getting response")
    responses = self.session_client.streaming_detect_intent(requests)
    print("got response")
    print(responses)
    # iterate through responses from pre-recorded audio
    try:
        for response in responses:
            print("checking response")
            if len(response.fulfillment_text) != 0:
                print("response not empty")
                self.stopRecording(response)  # stop if we already know the intent
    except Exception as e:
        print(e)
    # otherwise continue listening
    print("start recording (live)")

def stopRecording(self, query_result):
    """saves the recording to memory"""
    # stop recording
    self.recordingInProgress = False
    self.disable_google_speech(force=True)
    print("stopped recording")
    # process the response
    action = query_result.action
    text = query_result.fulfillment_text.encode("utf-8")
    if (action is not None) or (text is not None):
        if len(text) != 0:
            self.speech.say(text)
        if len(action) != 0:
            parameters = query_result.parameters
            self.execute_action(action, parameters)
As per the source code, the session client's streaming_detect_intent function expects an iterator of requests as its argument, but you are currently giving it a plain list of requests.
Won't work:
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
responses = self.session_client.streaming_detect_intent(requests)
# None Exception iterating requests!
Alternatives:
# wrap the list in an iterator
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
responses = self.session_client.streaming_detect_intent(iter(requests))

# Note: the example in the source code calls the function like this,
# but this gave me the same error
requests = [dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)]
for response in self.session_client.streaming_detect_intent(requests):
    pass  # process each response here
Using a generator structure
While this fixed the error, the intent detection still didn't work. I believe a better program structure is to use a generator, as suggested in the docs. Something like (pseudo-code):
def dialogflow_mic_stream_generator():
    # open the audio stream
    audio_stream = ...
    # send the configuration request first
    query_input = dialogflow.types.QueryInput(audio_config=audio_config)
    yield dialogflow.types.StreamingDetectIntentRequest(session=session_path,
                                                        query_input=query_input)
    # then yield audio data from the stream
    while audio_stream_is_active:
        chunk = audio_stream.read(chunk_size)
        yield dialogflow.types.StreamingDetectIntentRequest(input_audio=chunk)

requests = dialogflow_mic_stream_generator()
responses = session_client.streaming_detect_intent(requests)
for response in responses:
    ...  # process each response

Returning data to the original process that invoked a subprocess

Someone told me to post this as a new question. This is a follow-up to
Instantiating a new WX Python GUI from spawn thread
I added the following code to a script that gets called from a spawned thread (Thread2):
import subprocess

# Function that gets invoked by Thread #2
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read the response
    response = p.stdout.read()
    # Process the entered data
    processData()
In the new process running GUI2, I want the 'Done' button event handler to return four pieces of data to Thread2 and then destroy GUI2:
def onDone(self, event):
    # This is the part I need help with; trying to return data back to the
    # main process that instantiated this GUI (GUI2)
    process = subprocess.Popen(['python', 'MainGui.py'], shell=False, stdout=subprocess.PIPE)
    print process.communicate('input1', 'input2', 'input3', 'input4')
    # kill GUI
    self.Close()
Currently, this implementation spawns another Main GUI in a new process. What I want to do is return data back to the original process. Thanks.
Do the two scripts have to be separate? I mean, you can have multiple frames running on one main loop and transfer information between the two using pubsub: http://www.blog.pythonlibrary.org/2010/06/27/wxpython-and-pubsub-a-simple-tutorial/
Theoretically, what you're doing should work too. Other methods I've heard of involve using Python's socket server library to create a really simple server that the two programs can post data to and read data from. Or a database, or watching a directory for file updates.
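A minimal sketch of the pubsub route, assuming both frames run in one process and one wx.App (this uses the newer pypubsub API; the linked 2010 tutorial uses the older wx.lib.pubsub interface, and the topic and argument names here are illustrative):

from pubsub import pub

# Subscriber side (the main frame): register a listener for the topic.
def on_gui2_done(values):
    processData(values)  # hand the four values to the existing processData()

pub.subscribe(on_gui2_done, "gui2.done")

# Publisher side (GUI2's Done handler): publish the four values, then close.
def onDone(self, event):
    pub.sendMessage("gui2.done",
                    values=(dataInput1, dataInput2, dataInput3, dataInput4))
    self.Close()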
The updated function that gets invoked by Thread #2:
def scriptFunction():
    # Code to instantiate GUI2; GUI2 contains wx.TextCtrl fields and a 'Done' button
    p = subprocess.Popen("python secondGui.py", bufsize=2048, shell=True,
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    # Wait for a response
    p.wait()
    # Read the response and split the returned string, which contains
    # four words separated by commas
    responseArray = p.stdout.read().split(",")
    # Process the entered data
    processData(responseArray)
And the 'Done' button event handler that gets invoked when the 'Done' button is clicked on GUI2:
import sys

def onDone(self, event):
    # Package the four word inputs into a single string to return to the main process (Thread2)
    sys.stdout.write("%s,%s,%s,%s" % (dataInput1, dataInput2, dataInput3, dataInput4))
    # kill GUI2
    self.Close()
Thanks for your help Mike!