Streaming video from webcam through network with C++ application

I want to create server and client applications for controlling an electronic device over a network.
The server application should stream video from a web camera (RGB, 320 × 240) along with some information about the current state of the device, and control a microcontroller device via RS-232.
The client application should let the user adjust the control process and display the video from the web camera plus some information about the device.
What I have done: I use the Qt framework to create the GUIs of both applications and the TCP sockets, and I use OpenCV to grab images from the camera. The server application compresses each image as JPEG, appends some information about the device (~60 bytes) and sends it to the client.
Problem: On a local network everything works fine, but over the Internet I can only get about 15 fps because the JPEG images are too large. With stronger JPEG compression I can get an acceptable frame rate, but with poor image quality. So I wonder: is there a better way to stream video together with some extra information about the current state of the device? Maybe with FFmpeg or something else.
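Roughly, the per-frame send path looks like the sketch below (simplified; socket setup and the Qt event loop are omitted, and deviceState is just a placeholder for the ~60 bytes of state):

    #include <opencv2/opencv.hpp>
    #include <QTcpSocket>
    #include <QByteArray>
    #include <QDataStream>

    // Encode one frame as JPEG, prepend the device state, and send it with
    // length prefixes so the client can re-assemble frames from the TCP stream.
    void sendFrame(QTcpSocket &socket, const cv::Mat &frameBgr,
                   const QByteArray &deviceState /* ~60 bytes */)
    {
        std::vector<uchar> jpeg;
        // Lower quality -> smaller frames -> higher fps over a slow link.
        std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 60 };
        cv::imencode(".jpg", frameBgr, jpeg, params);

        QByteArray packet;
        QDataStream out(&packet, QIODevice::WriteOnly);
        out << quint32(deviceState.size()) << quint32(jpeg.size());
        packet.append(deviceState);
        packet.append(reinterpret_cast<const char *>(jpeg.data()),
                      static_cast<int>(jpeg.size()));
        socket.write(packet);
    }

The JPEG quality parameter is the only knob I am tuning right now; since JPEG compresses every frame independently, I suspect an inter-frame codec (e.g. H.264 via FFmpeg) is what would actually reduce the bandwidth at the same quality.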
Thanks for your replies, and sorry for my English!

Related

Stream audio from browser to WebRTC native C++ application

I managed to run the WebRTC peerconnection example, but it does not run in the browser.
I'm trying to find a way to stream both video and audio from the browser to my native program.
Is there any way?
It can be done. WebRTC is designed to work peer-to-peer between two WebRTC agents (typically web browsers). Your native program needs to become the second peer.
If you need to rely on open-source components, a good starting point is:
OpenSSL for the DTLS key exchange.
libsrtp to encrypt the RTP packets.
ffmpeg to decode the PCM audio from the browser (libvpx if you need to do video).
You'll also need to handle the ICE negotiation, which requires processing STUN messages, and extract the media payloads from the RTP packets. All of these steps come after you've settled on a signalling method to exchange the SDP offer and answer between your app and the browser.
As you've probably realised, starting from scratch is a major task. There are probably commercial libraries that will do the job and save you a lot of pain.
If that doesn't scare you off and you still want to attempt it with open-source components, this example "may" help. The sample does the reverse of what you've asked: it sends a video stream to Chrome rather than receiving an audio stream. The useful aspect is the connection negotiation; the sample program is able to get RTP packets flowing, which is often the main problem.
The example also uses Windows Media Foundation, which is Windows-specific, and it takes lots of shortcuts, particularly in the RTP and STUN packet processing.
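To illustrate the libsrtp step from the list above, here is a rough sketch of unprotecting an incoming SRTP packet once keying material is available. The key handling is a placeholder (in a real agent the key and salt come from the DTLS-SRTP handshake, e.g. exported via OpenSSL), and error handling is minimal:

    #include <srtp2/srtp.h>
    #include <cstring>

    // Sketch only: 'key' must be the 30-byte key+salt derived from the
    // DTLS-SRTP handshake for the default AES_CM_128_HMAC_SHA1_80 profile.
    bool unprotect_rtp(srtp_t &session, uint8_t *packet, int *len, const uint8_t *key)
    {
        if (session == nullptr) {
            srtp_init();

            srtp_policy_t policy;
            std::memset(&policy, 0, sizeof(policy));
            srtp_crypto_policy_set_rtp_default(&policy.rtp);
            srtp_crypto_policy_set_rtcp_default(&policy.rtcp);
            policy.ssrc.type = ssrc_any_inbound;      // accept any inbound SSRC
            policy.key = const_cast<uint8_t *>(key);
            policy.next = nullptr;

            if (srtp_create(&session, &policy) != srtp_err_status_ok)
                return false;
        }

        // On success 'packet' holds the plain RTP packet and *len shrinks
        // by the length of the SRTP authentication tag.
        return srtp_unprotect(session, packet, len) == srtp_err_status_ok;
    }

After this, the RTP header (RFC 3550) still has to be parsed to find the media payload, which is then handed to the ffmpeg/libvpx decoder.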

Native WebRTC dropping frames

Summary: How do I stream high-quality video using native WebRTC?
I have an H.264 stream that's 1920x1080 at about 30 fps. I can currently stream it from a server on localhost to a native client on localhost just fine.
I wrote a WebRTC server using Google's WebRTC native library. I've written a VideoEncoder and VideoEncoderFactory that take frames consisting of already-encoded data and broadcast them over a video track. Using this, I can send my H.264 stream to the WebRTC server over a pipe and see the video stream in a browser.
However, any time something moves, the video gets corrupted. It continues to play but is full of artifacts. Eventually I discovered that WebRTC is dropping some of my frames. When I attach a sequentially increasing ID to each frame before passing it to rtc::AdaptedVideoTrackSource::OnFrame, and log the same ID in webrtc::VideoEncoder::Encode, I can see that some of my frames simply disappear.
This kind of makes sense: I'm trying to stream high-quality video over something meant for video chat, and lowering my frame rate fixes the corruption. However, I'm not asking the WebRTC library to do a lot; it's just forwarding already-encoded data to a client on localhost. I have a native app that does this fine, and I've seen one browser WebRTC client that can do it. Is there a field in the SDP or some configuration change that will allow me to stream my video?
This was the solution: How to control bandwidth in WebRTC video call?
I had heard about changing the offer SDP but dismissed it, because I was told that the browser accepts unlimited bandwidth by default and that you'd only need to do this if you wanted to limit bandwidth. However, adding a "b=AS:" line with a high number has fixed all of my problems.
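For reference, the SDP munging itself is just a string edit on the offer before it is applied; a rough sketch (the 10000 kb/s figure is only an example of a "high number", and the helper name is made up):

    #include <string>

    // Insert "b=AS:<kbps>" right after the "c=" line of the video m-section,
    // which is where SDP (RFC 4566) places media-level bandwidth lines.
    std::string addVideoBandwidth(std::string sdp, int kbps)
    {
        std::size_t mVideo = sdp.find("m=video");
        if (mVideo == std::string::npos)
            return sdp;                                // no video section

        std::size_t cLine = sdp.find("c=IN", mVideo);  // connection line of that section
        if (cLine == std::string::npos)
            return sdp;

        std::size_t eol = sdp.find("\r\n", cLine);
        if (eol == std::string::npos)
            return sdp;

        sdp.insert(eol + 2, "b=AS:" + std::to_string(kbps) + "\r\n");
        return sdp;
    }

    // e.g. sdp = addVideoBandwidth(sdp, 10000);  // before applying the session description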

What ways do I have to stream OpenCV output to my own remote C++ GUI?

So I have on one hand an embedded device with a camera running OpenCV, and on the other hand a C++ (Qt) GUI. I would like to connect the two, i.e.:
"stream" all the output image frames/video from OpenCV to my remote C++ GUI
send commands from my C++ GUI to the embedded device
How can I do this, and what options do I have? I was thinking about sockets, but I don't know whether that is the easiest way to stream the image frames from OpenCV to my Qt GUI.
Thank you
You should give us more details about what you're trying to achieve.
You say "stream [...] to my remote C++ GUI": do you mean sending the data over a cabled connection? over a LAN network? over the Internet?
Depending on the answer this changes your system's architecture quite a bit. Especially in case you want to stream the data over the Internet. If your use case implies a LAN network, you can easily setup a peer-to-peer connection between the embedded device and the C++ app to send data. However, it's much more complicated if you want to send data over the Internet, because it is difficult to create a peer-to-peer connection if you don't have static IPs (which I'm assuming you do not have). You will need a server (which can be written with Qt as well) to work as a relay for sending data from the device to your C++ app.
Do you need actual video streaming (at 25fps), or is a low refresh rate (1-0.5fps) sufficient ?
(I'm making the assumption you want to send data over a network)
Because if a low image rate is sufficient, using WebSockets to send images on a regular basis might just do the trick.
Otherwise, you'll need to setup a UDP connection with a video buffer.
Hope this helps!
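To make the low-refresh-rate option concrete, here is a rough sketch of pushing JPEG-encoded OpenCV frames over a Qt WebSocket about once per second. The class name, URL and update rate are made up for illustration, and the receiving GUI would simply decode each binary message back into an image:

    #include <opencv2/opencv.hpp>
    #include <QWebSocket>
    #include <QTimer>
    #include <QByteArray>

    // Hypothetical sender running on the embedded device.
    class FrameSender : public QObject
    {
        Q_OBJECT
    public:
        explicit FrameSender(const QUrl &guiUrl, QObject *parent = nullptr)
            : QObject(parent), m_capture(0 /* default camera */)
        {
            m_socket.open(guiUrl);
            connect(&m_timer, &QTimer::timeout, this, &FrameSender::sendFrame);
            m_timer.start(1000);                       // ~1 fps refresh rate
        }

    private:
        void sendFrame()
        {
            cv::Mat frame;
            if (!m_capture.read(frame))
                return;

            std::vector<uchar> jpeg;
            cv::imencode(".jpg", frame, jpeg);         // one self-contained image per message
            m_socket.sendBinaryMessage(
                QByteArray(reinterpret_cast<const char *>(jpeg.data()),
                           static_cast<int>(jpeg.size())));
        }

        cv::VideoCapture m_capture;
        QWebSocket m_socket;
        QTimer m_timer;
    };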

Access client's webcam from Flask server

I am working on a face recognition project using Flask as my web server, running on an Ubuntu 14.04 machine. I am using OpenCV 2.4.9 for the image processing, written in Python 2.7. I would like to be able to access a client's webcam through their browser, capture an image or frame from the webcam stream, and send it back to the server to be processed. Is there an easy way in Python to get access to the client's webcam, or is it possible to use JavaScript in conjunction with my current code?
I'll assume that you are more interested in architectural decisions for your application than in specific implementation details. You will need both a client side and a server side for this application.
The client side is an HTML page with JavaScript that captures images from the webcam. There are many resources on the Internet about this topic. This article explains how it works with some examples. I would recommend using a JavaScript library like this one.
The next thing is to decide how the client application and the server side transfer image data. If you would like to stream webcam video to the server, do some computation and stream data back to the client application, WebSockets are your friend. This tutorial describes how to set up a Flask application for WebSockets.
A much easier approach is to POST image data to the server, do some computation and respond to the client. The downside of this approach is that it's not suitable for continuous video processing, but you can use it for processing single video frames; otherwise you would flood your server with requests.
The last thing to decide is how much processing is done on the images server-side. If you do some extensive computation that takes a long time, I would recommend Celery for background tasks. HOWEVER, this would change the architecture considerably.
For a proof of concept, I would recommend the following: take a single image with the webcam, POST it to the server, do a quick computation on the image, and respond with what you've computed.
Good luck.

WebRTC without actual audio / video device

I am planning (using the native API) to establish a WebRTC session between two clients.
My requirement is to establish the WebRTC session without using the PC's audio/video devices (as I plan to have multiple simultaneous WebRTC sessions on the same PC).
I am currently following this tutorial and want to know where in these files the following things happen:
open the audio / video device
read from audio / video device (capture)
write to audio / video device (play)
close the audio device
Kindly guide me if somebody knows the file name / function name where I need to look for the above four points.