How to check the media playing status in GStreamer?

I am developing an application using GStreamer. I found some GStreamer APIs like gst_element_get_state, but they do not seem to be an appropriate match for checking the current media playing status.
I have a device that runs two separate apps, one for the GUI and one a Linux app. Using GStreamer, I want some API that returns the current status of the media, i.e. whether it is playing or not.
Please give some suggestions on how to check this.
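For what it's worth, gst_element_get_state is usually the right call for this: it reports the element's current GstState, and GST_STATE_PLAYING means the media is actively playing. A minimal sketch, where the playbin URI is only a hypothetical stand-in for whatever pipeline your app builds:

```cpp
#include <gst/gst.h>

// Returns TRUE if the element/pipeline is currently in the PLAYING state.
static gboolean
is_playing (GstElement *pipeline)
{
  GstState current, pending;
  // A timeout of 0 returns immediately; GST_CLOCK_TIME_NONE would block
  // until any in-progress (async) state change completes.
  GstStateChangeReturn ret =
      gst_element_get_state (pipeline, &current, &pending, 0);

  if (ret == GST_STATE_CHANGE_FAILURE)
    return FALSE;
  return current == GST_STATE_PLAYING;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  // Hypothetical example pipeline; in your app this would be the pipeline
  // that the GUI controls.
  GstElement *pipeline =
      gst_parse_launch ("playbin uri=file:///tmp/test.mp4", NULL);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  g_usleep (G_USEC_PER_SEC); // give the async state change a moment
  g_print ("playing: %d\n", is_playing (pipeline));

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```

If the GUI app needs to be notified rather than poll, it can instead watch the pipeline's bus for GST_MESSAGE_STATE_CHANGED messages.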

Related

GStreamer with WebRTC, OpenCV-Server-Client

I don't know if I can say "I'm sorry for asking", but I have spent more than a week looking for a solution without success. I have a Jetson Nano, and with OpenCV I grab and process an image at 4 fps. I need to send this video to a web server so that clients connected to the server can get the video. Everything needs to be written in C++.
Because I need low latency, I ran tests with GStreamer and WebRTC without success. I don't have any web server ready, so I can use any implementation.
Does anyone know where I can find some example implementation with this scheme?
You can use mediasoup to send data to the server, which can then forward the stream over RTP to another endpoint like GStreamer or FFmpeg.
Here is a recording project where data is sent from the browser -> server -> gstreamer -> file.
mediasoup is written in C++ and has a wrapper for JS.
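To illustrate the GStreamer end of that chain, here is a minimal sketch that receives an RTP/H.264 stream and writes it to a file. The port, payload type, and output name are assumptions you would match to your mediasoup forwarding configuration:

```cpp
// Hedged sketch: receive RTP/H.264 (e.g. forwarded by mediasoup) into a file.
// Port 5004 and payload type 96 are assumptions.
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "udpsrc port=5004 caps=\"application/x-rtp,media=video,"
      "encoding-name=H264,payload=96\" ! rtph264depay ! h264parse "
      "! mp4mux ! filesink location=recording.mp4",
      &error);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  // Run until an error or end-of-stream is posted on the bus. Note that
  // mp4mux needs a clean EOS to finalize the file; matroskamux is more
  // forgiving if the process may be killed mid-recording.
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      (GstMessageType) (GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

  gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```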
I had a similar problem and used an example from the official GStreamer WebRTC repo. It's written in Python for Janus Gateway video rooms, but I think it can easily be rewritten in C++ as you need.
In the OpenCV code, I used v4l2loopback as a virtual output device to be used as input for the GStreamer WebRTC example.
I hope this approach helps you.
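If your OpenCV build has GStreamer support, one way to feed the loopback device is through cv::VideoWriter with a pipeline string. A minimal sketch, assuming /dev/video1 is the v4l2loopback device you created:

```cpp
// Hedged sketch: push processed OpenCV frames into a v4l2loopback device
// (/dev/video1 is an assumption; create it with the v4l2loopback module).
// The GStreamer WebRTC example can then read that device like a camera.
#include <opencv2/opencv.hpp>

int
main ()
{
  cv::VideoCapture cap (0);               // the real camera
  if (!cap.isOpened ()) return 1;

  int w = (int) cap.get (cv::CAP_PROP_FRAME_WIDTH);
  int h = (int) cap.get (cv::CAP_PROP_FRAME_HEIGHT);

  // OpenCV built with GStreamer support accepts a pipeline string here;
  // the fourcc argument is ignored for the GStreamer backend.
  cv::VideoWriter out (
      "appsrc ! videoconvert ! video/x-raw,format=YUY2 "
      "! v4l2sink device=/dev/video1",
      cv::CAP_GSTREAMER, 0, 4.0 /* fps */, cv::Size (w, h), true);
  if (!out.isOpened ()) return 1;

  cv::Mat frame;
  while (cap.read (frame)) {
    // ... your per-frame processing here ...
    out.write (frame);
  }
  return 0;
}
```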
I think there is no need to send it to a web server. In the GStreamer examples [https://github.com/GStreamer/gst-examples], the sendonly example sends a video to a web client using WebRTC. You can modify it to send an OpenCV Mat.
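The core modification would be swapping the example's test source for an appsrc fed from your capture loop. A minimal sketch of the push side, assuming 640x480 BGR frames at 4 fps (an autovideosink stands in for the example's webrtcbin wiring, which is omitted here):

```cpp
// Hedged sketch: push a BGR cv::Mat into a GStreamer appsrc. In the
// gst-examples sendonly demo this appsrc would replace the videotestsrc.
#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <opencv2/opencv.hpp>

static void
push_frame (GstElement *appsrc, const cv::Mat &bgr)
{
  const gsize size = bgr.total () * bgr.elemSize ();
  GstBuffer *buf = gst_buffer_new_allocate (NULL, size, NULL);
  gst_buffer_fill (buf, 0, bgr.data, size);
  // In a real loop you would also set GST_BUFFER_PTS from the frame rate.
  gst_app_src_push_buffer (GST_APP_SRC (appsrc), buf); // takes ownership
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_parse_launch (
      "appsrc name=src is-live=true format=time "
      "caps=video/x-raw,format=BGR,width=640,height=480,framerate=4/1 "
      "! videoconvert ! autovideosink", NULL);
  GstElement *appsrc = gst_bin_get_by_name (GST_BIN (pipeline), "src");
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  cv::Mat frame (480, 640, CV_8UC3, cv::Scalar (0, 128, 255));
  push_frame (appsrc, frame);
  // ... push further frames from your OpenCV processing loop ...

  gst_object_unref (appsrc);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```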

WebRTC and GStreamer on a Linux device

I have a small computer (something like an Arduino or a Raspberry Pi) with Linux, a camera, and GStreamer installed on it.
I need to stream H.264 video from this device to a browser using WebRTC. I also use Node.js as the signaling server.
In simple words, I need to turn my device into a WebRTC client. What is the best way to do this? Can I use the WebRTC Native API for this goal? How can I install it on my small device? Or maybe I just need to play with my GStreamer setup and install some WebRTC plugins for it?
Since you will have to use a signalling server anyway, I would say you should use Janus-Gateway. You mention CentOS for your signalling server; I am not 100% sure it will run on CentOS specifically, but I have run it successfully on a Debian Jessie build with just a few dependency installations.
Janus handles the entire call setup with the gateway (signalling and everything), so some port forwarding will probably have to be done so that the SDP exchange can occur (which you would have to worry about with any signalling server).
Install the gateway; there are a few dependencies, but all were simple installations.
Take a look at the janus_streaming plugin. It has a GStreamer example that will stream from a GStreamer pipeline. Also look at the streamingtest demo page to see how the JavaScript API works for that plugin.
The plugin listens on the ports given in the configuration file and will accept traffic from any IP address, so I expect you can run a GStreamer pipeline on a different machine on the same network and send it to the plugin.
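For illustration, a sender along these lines could feed the plugin; the host and port are assumptions that must match your streaming plugin configuration file:

```cpp
// Hedged sketch: send RTP/H.264 from a camera to the janus_streaming plugin.
// Host and port are placeholders; match them to the plugin's config file.
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "v4l2src ! videoconvert ! x264enc tune=zerolatency bitrate=500 "
      "! rtph264pay config-interval=1 pt=96 "
      "! udpsink host=janus.example.local port=8004",
      &error);
  if (!pipeline) {
    g_printerr ("Parse error: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  g_main_loop_run (g_main_loop_new (NULL, FALSE)); // run until killed
  return 0;
}
```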
NOTE: You will have to modify the SDP that the JavaScript sends to the gateway so that it includes H264 (and probably get rid of all other codecs as well, just to force negotiation). You can do this by accessing the SDP through the jsep object passed to the success case of the createOffer function in the Janus JavaScript API (jsep.sdp).
Another possibility is to use the Kurento Media Server (KMS), which has been written on top of GStreamer. I see two possibilities:
You install KMS on an Ubuntu 14.04 box and bridge it with your device, so that the device generates the video stream and sends it to the KMS box. From there, you can transcode it to VP9 and distribute it as a WebRTC stream quite easily using the Kurento client APIs (which may be used from Node.js). The application doing the transcoding will require an RtpEndpoint (receiving video from the device over RTP/H.264) connected to a WebRtcEndpoint (capable of sending the video stream through WebRTC). This option is quite simple to implement because it's the standard way of using KMS. However, you will need to generate the RTP/H.264 stream on the device along with appropriate SDP for it; this can be done using standard GStreamer elements (see the SDP sketch after these options).
You try to install KMS on your box directly. This might be more complex because it requires compiling KMS for the specific device, which may require some time investment. In addition, performing the transcoding on the device might be too expensive and you could starve its CPU.
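For the RTP leg of the first option, the SDP the device hands to the RtpEndpoint would look roughly like the following; the addresses, port, and payload type are placeholders to adapt to your setup:

```
v=0
o=- 0 0 IN IP4 192.168.1.10
s=Device H.264 stream
c=IN IP4 192.168.1.20
t=0 0
m=video 5004 RTP/AVP 96
a=rtpmap:96 H264/90000
a=sendonly
```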
Disclaimer: I'm a member of the Kurento development team.
You mentioned that you use a Node.js signaling server. Recently Ericsson released an open-source WebRTC GStreamer element: http://www.openwebrtc.io/, and along with their release they also published a WebRTC demo using node.js: http://demo.openwebrtc.io:38080/; the code is here: https://github.com/EricssonResearch/openwebrtc-examples/tree/master/server.
For WebRTC on the Raspberry Pi 2 you may want to consider UV4L. It allows you to stream live audio and video from the RPi to any browser on a PC (HTML5).

Compute OpenCV functions server-side on an image sent from Android

--EDIT--
My initial question wasn't understood very well, so allow me to rephrase.
I am working on an image processing application for Android.
Let's say I send an image from Android to some server.
What I want to know is: how do I process this image with OpenCV (C/C++) on the server and return the results to the mobile device?
Look into setting up a web service if you're just trying to offload the processing to a server and send back some processed data. There are a ton of examples and sample setups based on the server environment (OS, speed, bandwidth needs, etc.) out there that should help you get started. You would then set up the OpenCV environment on the server and perform all of your processing through those libraries. We would need more information on what type of image processing you hope to accomplish to help you further, but again, there are lots of examples for OpenCV and great documentation as well. The Android side will depend on how you set up the web service, so based on that choice there are different solutions available for easily interfacing with your server.
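As one concrete shape this could take, here is a minimal sketch of such a web service in C++ using cpp-httplib (an arbitrary library choice) plus OpenCV; the /process route and the Canny step are illustrative only:

```cpp
// Hedged sketch: an HTTP endpoint that receives an image posted from the
// Android app, processes it with OpenCV, and returns the result.
#include <httplib.h>
#include <opencv2/opencv.hpp>
#include <vector>

int
main ()
{
  httplib::Server server;

  server.Post ("/process",
      [] (const httplib::Request &req, httplib::Response &res) {
        // Decode the raw image bytes from the request body.
        std::vector<uchar> bytes (req.body.begin (), req.body.end ());
        cv::Mat img = cv::imdecode (bytes, cv::IMREAD_COLOR);
        if (img.empty ()) {
          res.status = 400; // not a decodable image
          return;
        }

        // Example processing step: edge detection (replace with your own).
        cv::Mat edges;
        cv::Canny (img, edges, 100, 200);

        // Encode the result and send it back to the phone.
        std::vector<uchar> out;
        cv::imencode (".png", edges, out);
        res.set_content (std::string (out.begin (), out.end ()), "image/png");
      });

  server.listen ("0.0.0.0", 8080); // port is an arbitrary choice
  return 0;
}
```

The Android side would then be a plain HTTP POST of the image bytes, which keeps the mobile code independent of OpenCV entirely.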

How can we do video recording using VNC?

How can we do video recording using VNC? I want to record the whole session. We have multiple clients and servers, so efficiency is important too.
Does anyone know of an open-source project which can handle this? I can only think of vncrec, but I haven't used it. Has anyone used this project?
Here is some info (Debian-centric) about recording and transcoding VNC sessions into movie files, including how to use vncrec and other options.

Attach a video stream onto an existing application

Is it possible to show a playing video on top of an existing application?
Application A is running.
Get Video A, place it on top of Application A, and then play it.
Thanks! Cheers!
If you mean to load a video and play it, you can use the DirectShow API, which will use the installed Windows codecs to attempt playback. You can also use FFmpeg for a selection of codecs that may not be installed on the computer.
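For reference, a minimal DirectShow playback sketch looks like this; the file path is a placeholder, error handling is omitted, and overlaying the video on another application's window would additionally involve IVideoWindow positioning, which is not shown:

```cpp
// Hedged sketch: minimal DirectShow playback of a video file on Windows.
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int
main ()
{
  CoInitialize (NULL);

  IGraphBuilder *graph = NULL;
  CoCreateInstance (CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                    IID_IGraphBuilder, (void **) &graph);

  // Build a playback graph using whatever codecs are installed.
  graph->RenderFile (L"C:\\videos\\sample.avi", NULL); // path is a placeholder

  IMediaControl *control = NULL;
  graph->QueryInterface (IID_IMediaControl, (void **) &control);
  control->Run ();

  // Block until playback finishes.
  IMediaEvent *event = NULL;
  graph->QueryInterface (IID_IMediaEvent, (void **) &event);
  long code = 0;
  event->WaitForCompletion (INFINITE, &code);

  event->Release ();
  control->Release ();
  graph->Release ();
  CoUninitialize ();
  return 0;
}
```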