Video Streaming application using GStreamer

I would like to create a GStreamer application for streaming video/audio over a wireless network. For the codec I will use H.264. For transmitting the data, please advise: should I use MPEG2-TS or RTP? I am not sure where I should start writing the application.
I will work in C/C++ on Ubuntu. Please help.
Thanks in advance.

RTP is commonly used. You can have a look at the examples under gst-plugins-good/tests/examples/rtp/.
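For example, a minimal H.264-over-RTP sender could be built with gst_parse_launch along the following lines (a sketch using GStreamer 1.x element names; videotestsrc, x264enc and the 127.0.0.1:5000 destination are placeholders to replace with your real camera, encoder and receiver address):

#include <gst/gst.h>

/* Sketch of an RTP sender: encode a test pattern as H.264, payload it as RTP
 * and send it over UDP. On the receiving side the stream would go through
 * udpsrc ! rtph264depay before decoding. */
int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *error = NULL;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc is-live=true ! videoconvert ! "
        "x264enc tune=zerolatency bitrate=1000 ! "
        "rtph264pay config-interval=1 pt=96 ! "
        "udpsink host=127.0.0.1 port=5000",
        &error);
    if (!pipeline) {
        g_printerr("Failed to build pipeline: %s\n", error->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}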

Related

GStreamer WebRTC pipeline problem for open source camera

Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (WebSocket, etc.) and use only the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam I have no problem streaming video to the client; however, when I try to use the FLIR camera, I run into a lot of problems during the implementation.
There are a few questions I would like to clear up, and I hope you can give me some recommendations.
Is there a specific data type that I should pipe into webrtcbin as a source? I would just like to know what kind of data I should send to WebRTC as a source.
When I try to send an image instead of the webcam, to check whether my WebRTC implementation works properly, it gives me the error "Pipeline is empty". What can cause this problem? This is actually the main reason I would like to know the data type etc., so that I understand what exactly I should pipe into webrtcbin.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
The client and the Jetson Nano are on the same network.
The signalling server runs on the Jetson Nano.
By running gst-inspect-1.0 webrtcbin you will find that both the source and sink capabilities of this element are just application/x-rtp.
Therefore, to feed webrtcbin you need to give it RTP, i.e. put an RTP payloader such as rtph264pay (video) or rtpopuspay (audio) in front of it; and if you want to consume webrtcbin's source pads, you need to pipe them into the matching RTP depayloaders, such as rtph264depay for video and rtpopusdepay for audio.
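For illustration, the media side of a send-only webrtcbin pipeline might look roughly like this (a sketch modelled on the gstwebrtc-demos sendrecv example; videotestsrc and x264enc are placeholders, and the signalling part - connecting webrtcbin's on-negotiation-needed and on-ice-candidate signals to your Python server - is omitted):

#include <gst/gst.h>

/* webrtcbin only accepts application/x-rtp, so the encoded video has to pass
 * through an RTP payloader (rtph264pay) before being linked to it. The SDP/ICE
 * exchange with the signalling server still has to be wired up separately. */
int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "webrtcbin name=sendonly "
        "videotestsrc is-live=true ! videoconvert ! queue ! "
        "x264enc tune=zerolatency ! rtph264pay ! "
        "application/x-rtp,media=video,encoding-name=H264,payload=96 ! "
        "sendonly.",
        NULL);

    GstElement *webrtc = gst_bin_get_by_name(GST_BIN(pipeline), "sendonly");
    /* g_signal_connect(webrtc, "on-negotiation-needed", ..., NULL);  */
    /* g_signal_connect(webrtc, "on-ice-candidate", ..., NULL);       */
    gst_object_unref(webrtc);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);
    return 0;
}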

Which functions of Live555 are used in VLC for the 'network-caching' option

I know VLC uses live555 for RTSP streaming. There is an option in VLC's Open Media dialog, when opening a network stream, that tells VLC to buffer the stream for some time before starting to play it. The option is 'network-caching', in milliseconds. I want to know which functions in live555 VLC uses to provide this feature. I have tried increaseReceiveBufferTo(...) and ReorderingPacketBuffer::setThresholdTime(...), but they do not produce this behaviour.
Thanks
This parameter is not used directly in the live555 or RTSP module. Looking at the source code shows that the value is used to adjust the presentation timestamps. Since live555 simply hands you packets with the timestamps as they were at the RTP level, you have to implement the buffering yourself.
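A rough sketch of the idea, independent of live555's API (the frame type and buffer class here are hypothetical, not part of live555 or VLC): keep each received frame in a queue together with its arrival time and only hand it on once the configured caching delay has elapsed.

#include <chrono>
#include <cstdint>
#include <deque>
#include <optional>
#include <vector>

// Hypothetical frame type: payload plus the time it was received.
struct TimedFrame {
    std::vector<uint8_t> data;
    std::chrono::steady_clock::time_point arrival;
};

// Holds frames back for a fixed delay, roughly what 'network-caching' controls.
class NetworkCacheBuffer {
public:
    explicit NetworkCacheBuffer(std::chrono::milliseconds delay) : delay_(delay) {}

    void push(TimedFrame f) { queue_.push_back(std::move(f)); }

    // Returns a frame only after it has spent at least delay_ in the buffer.
    std::optional<TimedFrame> pop(std::chrono::steady_clock::time_point now) {
        if (!queue_.empty() && now - queue_.front().arrival >= delay_) {
            TimedFrame f = std::move(queue_.front());
            queue_.pop_front();
            return f;
        }
        return std::nullopt;
    }

private:
    std::chrono::milliseconds delay_;
    std::deque<TimedFrame> queue_;
};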

How to create an AVPacket manually with encoded AAC data to send MPEG-TS?

I'm developing an app which sends an MPEG2-TS stream using the FFmpeg API (avio_open, avformat_new_stream, etc.).
The problem is that the app already has AAC-LC audio, so the audio frames do not need to be encoded; my app just passes through the data received from a socket buffer.
To open and send MPEG-TS using FFmpeg, I must have an AVFormatContext, which as far as I know is created by the FFmpeg API for an encoder.
Can I create an AVFormatContext manually for already-encoded AAC-LC data, or do I have to decode and re-encode the data? The information I have is the sample rate, codec and bitrate.
Any help will be greatly appreciated. Thanks in advance.
Yes, you can use the encoded data as-is if your container supports it. There are two steps involved here: encoding and muxing. Encoding compresses the data; muxing mixes it into the output file so that the packets are properly interleaved. The muxing example in the FFmpeg distribution helped me with this.
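A rough sketch of muxing pre-encoded AAC into MPEG-TS with the FFmpeg API (assumptions: the incoming data is ADTS-framed AAC-LC, the sample rate/channels/bitrate values and the UDP address are examples, and field names such as codecpar vs. the older st->codec depend on your FFmpeg version):

extern "C" {
#include <libavformat/avformat.h>
}

int main() {
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "mpegts", "udp://127.0.0.1:1234");

    // Describe the existing AAC stream instead of opening an encoder.
    AVStream *st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type  = AVMEDIA_TYPE_AUDIO;
    st->codecpar->codec_id    = AV_CODEC_ID_AAC;
    st->codecpar->sample_rate = 48000;    // example value
    st->codecpar->channels    = 2;        // example value (ch_layout in newer FFmpeg)
    st->codecpar->bit_rate    = 128000;   // example value
    st->time_base = {1, st->codecpar->sample_rate};

    avio_open(&oc->pb, "udp://127.0.0.1:1234", AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    // For each AAC frame taken from the socket buffer (not shown here):
    AVPacket *pkt = av_packet_alloc();
    // pkt->data = <pointer to the ADTS frame>;
    // pkt->size = <frame size in bytes>;
    // pkt->pts  = pkt->dts = <timestamp in st->time_base units; rescale if the
    //                         muxer changed the time base in avformat_write_header>;
    // pkt->stream_index = st->index;
    // av_interleaved_write_frame(oc, pkt);

    av_write_trailer(oc);
    av_packet_free(&pkt);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}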
You might also take a look at the following class: https://sourceforge.net/p/karlyriceditor/code/HEAD/tree/src/ffmpegvideoencoder.cpp - this file is from one of my projects and contains a video encoder. Starting from line 402 you'll see the setup for non-converted audio - it is kind of a hackish way, but it worked. Unfortunately I still ended up re-encoding the audio, because for my formats it was not possible to achieve the frame-perfect synchronization I needed.

How do I implement DeviceSource for streaming from a webcam in Live555?

I am working on a webcam streaming server project, using Live555 as the server.
I need to be able to stream from ordinary USB webcams, which requires me to implement DeviceSource.cpp, as I read on live555's FAQ page. However, I currently do not have enough knowledge of, or clues about, how to implement this.
I intend to use ffmpeg as the encoder.
Can anybody provide me with some proper directions that I can follow?
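For orientation, the usual pattern (following the shape of live555's DeviceSource.cpp template) looks roughly like this; grabEncodedFrame() is a hypothetical placeholder for your webcam capture plus FFmpeg encoding step:

#include "FramedSource.hh"
#include <cstring>
#include <sys/time.h>

// Sketch of a FramedSource that delivers encoded webcam frames to live555.
class WebcamSource : public FramedSource {
public:
    static WebcamSource* createNew(UsageEnvironment& env) { return new WebcamSource(env); }

protected:
    WebcamSource(UsageEnvironment& env) : FramedSource(env) {}

private:
    virtual void doGetNextFrame() { deliverFrame(); }

    void deliverFrame() {
        if (!isCurrentlyAwaitingData()) return;

        static unsigned char buf[200000];                  // scratch buffer for one encoded frame
        unsigned size = grabEncodedFrame(buf, sizeof buf); // hypothetical capture + encode step

        if (size > fMaxSize) {                 // truncate if the sink's buffer is too small
            fNumTruncatedBytes = size - fMaxSize;
            size = fMaxSize;
        }
        fFrameSize = size;
        memcpy(fTo, buf, size);
        gettimeofday(&fPresentationTime, NULL);

        // Tell live555 a frame is ready; it will schedule the next doGetNextFrame() call.
        FramedSource::afterGetting(this);
    }

    unsigned grabEncodedFrame(unsigned char* dst, unsigned maxLen);  // not implemented here
};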

Play RTP video stream using Qt?

I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H264 and contains no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simple solution?
I'm currently experiencing this issue: when trying to play the video stream the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for the interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that this is the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
The way I understand it, Phonon (at least on Windows) works by having Qt provide a Phonon backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), or for GStreamer in your case. You would then either obtain or write your own DirectShow filter that can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon takes care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);  // route the media object's output to the video widget
media->setCurrentSource(source);   // 'source' is the Phonon::MediaSource to play
media->play();
It seems that the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?
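One way to check that is a standalone GStreamer receive pipeline along these lines (a sketch; the port, payload type and clock-rate must match the sender, and the element names are from GStreamer 1.x - on the 0.10 series from that era the decoder would be ffdec_h264 instead of avdec_h264):

#include <gst/gst.h>

/* Standalone test: receive an RTP/H.264 stream on UDP port 5000 and display it. */
int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GstElement *pipeline = gst_parse_launch(
        "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
        "encoding-name=H264,payload=96,clock-rate=90000\" ! "
        "rtph264depay ! avdec_h264 ! videoconvert ! autovideosink",
        NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}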