I've written a cpp code, based on the Onvif example code:
https://github.com/GStreamer/gst-rtsp-server/blob/master/examples/test-onvif-backchannel.c
I have managed to stream a regular RTSP stream, but when I try to read it with an ONVIF client it just doesn't work, and it says the stream is not ONVIF compatible.
What am I doing wrong?
My gst version is 1.14.5
I think you might have to upgrade the GStreamer library on your board/PC to allow for ONVIF... the best option is to upgrade to 1.18 or even 1.20. Those versions have support for ONVIF replay and backchannel, which are not standard RTSP.
Otherwise you may want to extend the library yourself with the features you need.
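If you are unsure which version is actually on the board, a quick runtime check (a minimal sketch using the standard GStreamer C API, which compiles as C++ too) is:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    guint major, minor, micro, nano;
    gst_version(&major, &minor, &micro, &nano);
    g_print("GStreamer %u.%u.%u\n", major, minor, micro);
    // Per the note above, the ONVIF replay/backchannel features want 1.18+,
    // so a 1.14.5 reported here would explain the incompatibility.
    return 0;
}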
Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (WebSocket etc.) and use only the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam, I do not have any problem streaming video to the client; however, when I try to use the FLIR camera, I run into a lot of problems during implementation.
There are a few questions in my mind I'd like to clear up. I hope you can give me some recommendations.
Is there a specific data type that I should pipe into webrtc as a source? I would just like to know what kind of data I should send to webrtc as a source.
When I try to send an image to check whether my WebRTC implementation works properly (i.e. without the webcam), it gives me the error "Pipeline is empty". What can cause this problem? This is actually the main reason why I would like to know the data type etc., to understand what exactly I should pipe into webrtc.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
The client and the Jetson Nano are on the same network
The signalling server is running on the Jetson Nano
By running gst-inspect-1.0 webrtcbin you will find that both the source and sink capabilities of this plugin are just application/x-rtp.
Therefore, if you want to consume webrtcbin's source pads, you will need to pipe them into some sort of RTP depayloader such as rtph264depay for video or rtpopusdepay for audio. Conversely, anything you send into its sink pad must already be RTP-payloaded (e.g. with rtph264pay).
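For example, a sending pipeline could look roughly like this (a minimal sketch with videotestsrc standing in for the camera; for a FLIR camera the source and encoder elements will differ, and the whole signalling part is omitted):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    // webrtcbin only accepts application/x-rtp, so the video must be encoded
    // and RTP-payloaded (rtph264pay) before it reaches the bin.
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc ! videoconvert ! x264enc tune=zerolatency "
        "! rtph264pay config-interval=-1 "
        "! application/x-rtp,media=video,encoding-name=H264,"
        "clock-rate=90000,payload=96 "
        "! webrtcbin name=sendrecv",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    // ... connect on-negotiation-needed / on-ice-candidate on "sendrecv" and
    // exchange SDP through the Python signalling server (see gstwebrtc-demos).
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}

A "Pipeline is empty" style error usually means the pipeline description failed to parse or an element is missing, which is why checking error->message is worthwhile.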
The project I'm currently working on requires an addition to the already existing VoIP capabilities. The core for speech processing is in C; the remainder is in C++ with Qt, and the audio is handled via portaudio. The connection between users is currently established via UDP, which I think has to be changed for the planned video connection. The development platform is Windows with VS2012; however, the system is cross-platform.
In a nutshell, what I want to do is: grab the video signal from a webcam, synchronize the audio coming from the C core with the video from the webcam, and use a library and codecs for (de)coding/muxing the signals on the respective sides and sending them via RTP. The system should be capable of multicast transmission.
I did some research on possible libraries and stumbled upon ffmpeg and libVLC. For the codec I thought about using x264. If I'm correct, ffmpeg and libVLC should both be capable of what I'm looking for?
However, I'm not sure which one to pick, and from their documentation I really can't tell which library is the better fit. Has anybody had similar problems and can help me out? I'm quite a newbie when it comes to video processing and encoding.
Extra question: Do you have any hints or approaches on syncing the video and audio signals?
If anyone is interested, this is what I ended up doing:
I am using the WebM container format, currently VP8 with Vorbis (but going to change to VP9 with Opus soon, once it is out of beta), handled by the ffmpeg/libav libraries for encoding/decoding/muxing etc., and SDL for displaying and threading. ffmpeg/libav was cross-compiled on Unix with LGPL support to keep our project closed source.
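On the syncing question from above: with ffmpeg/libav the essence is that audio and video packets are both stamped from one shared clock, and av_interleaved_write_frame() then interleaves them in the container. A rough sketch (stamp_packet and the microsecond capture clock are illustrative names of mine, not a real API):

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>
}
#include <cstdint>

// Convert a capture timestamp (microseconds since a common start instant,
// shared by the audio and video threads) into the stream's own time base.
static void stamp_packet(AVPacket *pkt, int64_t capture_us, AVStream *st) {
    AVRational us = {1, 1000000};        // microsecond time base
    pkt->pts = av_rescale_q(capture_us, us, st->time_base);
    pkt->dts = pkt->pts;                 // fine as long as there are no B-frames
    pkt->stream_index = st->index;
}

Because both streams derive their PTS from the same wall-clock origin, the demuxing player can realign them on the other end.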
I am working on a webcam streaming server project, using Live555 as the server.
I need to be able to stream from ordinary USB webcams, which requires me to implement DeviceSource.cpp, as I read on live555's FAQ page. However, I currently do not have enough knowledge or any clue about how to implement this.
I intend to use ffmpeg as the encoder.
Can anybody provide me with some proper directions that I can follow?
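For orientation, the pattern the FAQ points at boils down to subclassing FramedSource. A minimal sketch (readEncodedFrame is a hypothetical hook where the ffmpeg-encoded webcam frames would come from; a production version should deliver frames asynchronously via an event trigger rather than reading inline):

#include "FramedSource.hh"
#include <cstring>
#include <sys/time.h>

class WebcamSource : public FramedSource {
public:
    static WebcamSource* createNew(UsageEnvironment& env) {
        return new WebcamSource(env);
    }
protected:
    WebcamSource(UsageEnvironment& env) : FramedSource(env) {}
private:
    virtual void doGetNextFrame() {
        unsigned char frame[100000];
        unsigned frameSize = readEncodedFrame(frame, sizeof frame); // hypothetical

        if (frameSize > fMaxSize) {      // truncate if the sink's buffer is smaller
            fFrameSize = fMaxSize;
            fNumTruncatedBytes = frameSize - fMaxSize;
        } else {
            fFrameSize = frameSize;
        }
        gettimeofday(&fPresentationTime, NULL); // stamp with wall-clock time
        memmove(fTo, frame, fFrameSize);

        // Tell live555 the frame is ready; it schedules the next doGetNextFrame().
        FramedSource::afterGetting(this);
    }
    unsigned readEncodedFrame(unsigned char* buf, unsigned maxSize); // not shown
};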
Does somebody have instructions on how to make an RTSP client with Qt? I have already heard of live555, but I don't know how to link it with Qt.
Is there another way?
I would like to do it with Qt, so that it also runs under Linux and other platforms.
To have an RTSP client, you need to process the RTSP protocol one way or another.
Live555 is one way to do that; it is just a C++ library that can be linked with other applications, including Qt applications. It is certainly possible to link Live555 with Qt.
Another way would be to write your own RTSP client based on the RFC spec.
The last option would be to just use the Phonon framework in Qt: http://doc.trolltech.com/4.6/phonon-overview.html (provided your Phonon backend supports RTSP). This is the easiest way, as Qt and the system handle all the backend decoding of the media; it integrates seamlessly with Qt and does not require extra libraries to be linked into your app.
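In code, the Phonon route is only a few lines (a sketch; the RTSP URL is a placeholder, and it only plays if the backend actually handles the protocol):

#include <Phonon/MediaObject>
#include <Phonon/VideoWidget>
#include <Phonon/MediaSource>
#include <Phonon/Path>
#include <QUrl>

Phonon::MediaObject *media = new Phonon::MediaObject;
Phonon::VideoWidget *video = new Phonon::VideoWidget;
Phonon::createPath(media, video);
media->setCurrentSource(Phonon::MediaSource(QUrl("rtsp://camera.example/stream")));
media->play();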
I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H264 and contains no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data (see the sketch after this list)
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
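As referenced in step 2, here is a sketch of the QIODevice side (pushData is a hypothetical method fed by whatever produces the stream data, e.g. the JRTPLIB receiver or a gstrtpbin output):

#include <QIODevice>
#include <QByteArray>
#include <cstring>

class RtpStreamDevice : public QIODevice {
public:
    explicit RtpStreamDevice(QObject *parent = 0) : QIODevice(parent) {
        open(QIODevice::ReadOnly);
    }
    bool isSequential() const { return true; }  // it's a live stream, not a file
    // Called whenever new depayloaded stream data arrives from the network side.
    void pushData(const QByteArray &chunk) {
        m_buffer.append(chunk);
        emit readyRead();                       // wake up Phonon's reader
    }
protected:
    qint64 readData(char *data, qint64 maxSize) {
        qint64 n = qMin<qint64>(maxSize, m_buffer.size());
        memcpy(data, m_buffer.constData(), n);
        m_buffer.remove(0, n);
        return n;
    }
    qint64 writeData(const char *, qint64) { return -1; } // read-only device
private:
    QByteArray m_buffer;
};

Phonon::MediaSource has a constructor taking a QIODevice*, so an instance of this class can be passed straight to setCurrentSource().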
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simpler solution?
I'm currently experiencing this issue: when trying to play the video stream, the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that this is the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
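In case that link goes stale, the gist of the libVLC approach is roughly this (a sketch; stream.sdp is a placeholder for your session description file):

#include <vlc/vlc.h>

int main() {
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    // The SDP file describes the incoming H.264 RTP stream.
    libvlc_media_t *media = libvlc_media_new_path(vlc, "stream.sdp");
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);
    // To render inside a Qt widget on Linux, hand over its native window:
    // libvlc_media_player_set_xwindow(player, widget->winId());
    libvlc_media_player_play(player);
    // ... run your event loop, then release the player and the instance.
    return 0;
}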
The way I understand Phonon works, at least on Windows, is that Qt provides a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), and for GStreamer in your case. Then you would either obtain or write your own DirectShow filter that can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon takes care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);   // route the decoded media into the video widget
media->setCurrentSource(source);    // 'source' is your RTP stream source
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?
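A quick way to check that is a minimal standalone receiver (a sketch; the port and caps are assumptions, adjust them to match the actual stream):

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    // Receive H.264 RTP on UDP port 5000, depayload, decode and display.
    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "clock-rate=90000,encoding-name=H264,payload=96\" "
        "! rtph264depay ! decodebin ! autovideosink",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }
    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE)); // run until interrupted
    return 0;
}

If this plays, the problem is on the Phonon side; if not, the GStreamer backend was never going to accept the stream either.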