Video stream output via USB/TRRS port using OpenCV (C++)

I have a Raspberry Pi on which I process video from a camera connected via USB. I need to output only the processed video in real time, directly through the USB/TRRS port (not the entire desktop with an OpenCV window, just the video itself).
In the end, I just want to connect another board and have it receive the Raspberry Pi's output at its input as if it were an ordinary camera.
P.S. Whether the implementation is C++ or Python doesn't matter.
P.P.S. Wireless transmission is not suitable; the Raspberry Pi must emulate output over USB/TRRS like a real camera.

Some steps:
Connect the Raspberry Pi to the display.
Switch to a text console: Ctrl + Alt + F1
Stop the display manager: sudo service lightdm stop
ls /dev/fb* (this should show the screen's framebuffer device, typically fb0)
Then work with OpenCV like this:
import cv2

cap = cv2.VideoCapture(0)                                  # USB camera
with open('/dev/fb0', 'rb+') as buf:
    while cap.isOpened():
        ret, frame = cap.read()
        frame32 = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA)  # fb0 expects 32-bit BGRA pixels
        fbframe = cv2.resize(frame32, (1920, 1080))        # match the framebuffer geometry
        buf.seek(0)                                        # rewind before writing each frame
        buf.write(fbframe.tobytes())
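If the picture comes out skewed or discolored, check the framebuffer's actual geometry and depth first (the fbset utility prints them): the resize target and the BGRA conversion above assume a 1920x1080, 32-bit framebuffer.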
Thread that helped.

Related

How could I handle mouse input while streaming MJPEG video to browser?

Let's suppose I'm streaming a video to a browser, for example using one of the solutions reported here:
Streaming openCV C++ video to the browser
Let's suppose I would like to select a region of the streamed video, for example with a rectangle drawn with the mouse, and then send the coordinates of the rectangle's vertices to the server.
I could connect the server and client with websockets, for example, but I don't know how to reconcile two things: the server runs a streaming loop that cannot be interrupted, while at the same time I have to check for mouse input on the client and send the mouse position to the server, which could then modify the streamed output accordingly (for example, apply a specific image filter to the selected rectangle).
How could I achieve this?
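One common pattern is to decouple the two concerns: the streaming loop never waits for input, it just reads the most recent selection under a lock, while a websocket handler updates that selection whenever the client sends new coordinates. A minimal sketch using Python's threading module and the third-party websockets package (the message format, port, and names like roi are illustrative assumptions, not part of the question):

import asyncio
import json
import threading

import websockets  # pip install websockets

roi = None                      # latest rectangle selected in the browser
roi_lock = threading.Lock()

async def handler(websocket):   # older websockets versions also pass a `path` argument
    global roi
    async for message in websocket:   # e.g. '{"x1":10,"y1":20,"x2":110,"y2":220}'
        with roi_lock:
            roi = json.loads(message)

def streaming_loop():
    while True:
        with roi_lock:
            rect = roi          # never blocks: just take the newest selection
        # ...grab a frame, apply the filter to `rect` if it is set,
        # and push the MJPEG frame to the client as before...

async def main():
    threading.Thread(target=streaming_loop, daemon=True).start()
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # serve forever

asyncio.run(main())

Because the loop only peeks at shared state, it is never interrupted; the selection simply takes effect on the next frame.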

How to access USB camera through Qt c++ gui application

I am new to Qt GUI development. I have installed Qt on the Windows platform. I need to set up a Qt C++ GUI application that runs a connected USB camera. I tried to find a related example, but most are webcam applications. Is there a suggested example for accessing a USB camera via a Qt C++ GUI that I can go through?
There is no essential difference between a "webcam" and a "USB camera". A webcam is a video camera that feeds or streams images or video in real time, which means your USB camera can act as a webcam if it supports that function.
To accomplish your task, use the Qt documentation and its excellent examples, like this Camera Example.
Also, these could be useful:
Recording Video from USB Cam with Qt5
Camera Overview
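For a bare-bones starting point, the whole task comes down to three Qt Multimedia classes. A minimal sketch in PyQt5 for brevity (the C++ classes QCamera, QCameraInfo, and QCameraViewfinder are used the same way):

from PyQt5.QtWidgets import QApplication
from PyQt5.QtMultimedia import QCamera, QCameraInfo
from PyQt5.QtMultimediaWidgets import QCameraViewfinder

app = QApplication([])

# Any attached USB camera shows up here, same as a built-in webcam
for info in QCameraInfo.availableCameras():
    print(info.description())

viewfinder = QCameraViewfinder()
viewfinder.show()

camera = QCamera(QCameraInfo.defaultCamera())
camera.setViewfinder(viewfinder)
camera.start()

app.exec_()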

Qt Widgets: How to get the RGB buffer from QCamera

I am porting a video streamer application to Qt (from HTML). I already have the webcam working in Qt, and now I want to know: how can I get the RGB video buffer from a Qt camera? All the samples I can find capture an image to a file.
I am using Qt Widgets, not QML, since it is a desktop application.
What I am trying to do is get the camera's image buffer, compress it, and send it over the network.
I also want to trigger this manually: the capture of the next frame should happen only after all the compression and sending is done, to prevent timing issues.
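One Qt5 approach is to set a custom QAbstractVideoSurface as the camera's viewfinder, so every frame arrives in present() as a mappable buffer instead of being written to a file. A PyQt5 sketch of that idea, with the compression/sending step left as a stub:

from PyQt5.QtWidgets import QApplication
from PyQt5.QtMultimedia import (QAbstractVideoBuffer, QAbstractVideoSurface,
                                QCamera, QVideoFrame)

class FrameGrabber(QAbstractVideoSurface):
    def supportedPixelFormats(self, handleType=QAbstractVideoBuffer.NoHandle):
        # Ask the backend for 32-bit RGB frames (a backend that cannot
        # supply RGB32 would need an extra conversion step)
        return [QVideoFrame.Format_RGB32]

    def present(self, frame):
        if not frame.isValid():
            return False
        copy = QVideoFrame(frame)              # shallow copy we are allowed to map
        copy.map(QAbstractVideoBuffer.ReadOnly)
        ptr = copy.bits()
        ptr.setsize(copy.mappedBytes())
        rgb = ptr.asstring()                   # raw RGB32 pixel data as bytes
        copy.unmap()
        # ...compress `rgb` and send it over the network; returning only
        # when that is done gives the "next frame when finished" pacing
        # the question asks for...
        return True

app = QApplication([])
camera = QCamera()
grabber = FrameGrabber()
camera.setViewfinder(grabber)                  # frames now flow into present()
camera.start()
app.exec_()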

How to configure a Basler camera so it doesn't produce duplicate images

I have configured a Basler camera (acA1920-40um) connected via USB. When I use the PylonViewer software and store a sequence of still images, I get duplicate frames. What parameters should I change to prevent this from happening?
The parameters I set after connecting the camera to the PC are:
Enable acquisition frame rate = active
fps = 25 (acquisition frame rate); trigger = off; exposure auto = off; exposure time = 1000
In the next step I grabbed the frames using OpenCV and C++, with code similar to the following link, which again gives me duplicate frames.
Convert images from Pylon to Opencv in c++
I had the same problem and contacted Basler customer service about it. The issue you are running into is likely due to how the recording options are set in PylonViewer.
Go to the Recording Settings, set 'Record a frame every' to 1, and select 'Frame(s)' from the drop-down list.
[screenshot of the PylonViewer Recording Settings]
This worked for me. It was not at all intuitive that those settings applied to 'Video'; given the layout of the UI, I thought they only related to the 'Sequence of still images' option.
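If you also want to rule out duplicates on the grabbing side, each grab result carries a per-frame counter you can compare across iterations. A sketch using Basler's pypylon bindings (assuming USB3 node names, as on the acA1920-40um; the C++ pylon API exposes the same grab-result fields):

from pypylon import pylon

camera = pylon.InstantCamera(
    pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()
camera.AcquisitionFrameRateEnable.SetValue(True)   # node names for a USB3 camera
camera.AcquisitionFrameRate.SetValue(25.0)
camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)

last_id = None
while camera.IsGrabbing():
    res = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
    if res.GrabSucceeded():
        if res.BlockID == last_id:                 # same camera frame delivered twice
            print("duplicate frame:", res.BlockID)
        last_id = res.BlockID
        img = res.Array                            # numpy array, usable with OpenCV
    res.Release()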

Red artifact when visualizing an RTSP stream via GStreamer and Qt5

I've written a C++ program that receives an RTSP stream via GStreamer and displays the video via Qt5 in a QWidget. As the GStreamer video sink, I use a Widgetqt5glvideosink.
The problem is that the received stream has too much red in it. This only occurs when the vertical resolution exceeds about 576 pixels (lower resolutions show no issue).
When I use CPU rendering (Widgetqt5videosink) instead of OpenGL rendering, I get a correct image.
When I view the stream via the GStreamer command line or via VLC, it is also correct.
So it appears to be an issue with the OpenGL-rendered QWidget.
Is this a driver issue, or something else?
Info:
Tested on Ubuntu 16.04 and 17.04 for the viewer application.
Links:
https://gstreamer.freedesktop.org/data/doc/gstreamer/head/qt-gstreamer/html/qtvideosink_overview.html
I managed to fix my problem by patching two files in the qt-gstreamer source code.
Two color matrices for BT.709 colorimetry were wrong.
Patch to fix red artifact in Widgetqt5glvideosink
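The ~576-line threshold fits this explanation: SD video is normally tagged BT.601, while HD streams use BT.709, so a wrong BT.709 matrix only shows up at HD resolutions. The patch itself is not reproduced here, but for reference, a standard BT.709 limited-range YCbCr-to-RGB conversion uses roughly these coefficients (a sketch for checking output colors, not the actual qt-gstreamer code):

import numpy as np

# Rec. ITU-R BT.709, limited range (Y in [16, 235], Cb/Cr in [16, 240]),
# coefficients rounded to four decimals
M_BT709 = np.array([
    [1.1643,  0.0000,  1.7927],   # R
    [1.1643, -0.2132, -0.5329],   # G
    [1.1643,  2.1124,  0.0000],   # B
])

def ycbcr709_to_rgb(y, cb, cr):
    """Convert one limited-range BT.709 YCbCr pixel to RGB in [0, 255]."""
    rgb = M_BT709 @ np.array([y - 16.0, cb - 128.0, cr - 128.0])
    return np.clip(rgb, 0.0, 255.0)

Feeding a few known YCbCr values through such a reference matrix and comparing against the rendered output is a quick way to confirm whether a sink's colorimetry is off.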