I'm building a pipeline in GStreamer, and I would like to destroy the pipeline when the WebRTC connection goes away. In the current situation, when a WebRTC connection is established again, I receive old SDP messages and offers from previous pipelines.
Before the pipeline is created, these commands run to set up the WebSocket:
QWebSocket *pSocket = websocketServer->nextPendingConnection();
m_client = pSocket;
m_client->sendTextMessage(QStringLiteral("Initiating WEBRTC handshake"));
QObject::connect(pSocket, &QWebSocket::textMessageReceived, this, &c_module_videostream::processTextMessage);
QObject::connect(pSocket, &QWebSocket::binaryMessageReceived, this, &c_module_videostream::processBinaryMessage);
QObject::connect(pSocket, &QWebSocket::disconnected, this, &c_module_videostream::socketDisconnected);
QObject::connect(this, &c_module_videostream::s_JSONReadyToSend, this, &c_module_videostream::sendJSONTextMessage);
QObject::connect(this, &c_module_videostream::s_needImageData, this, &c_module_videostream::startImageDataStream);
QObject::connect(this, &c_module_videostream::s_enoughImageData, this, &c_module_videostream::stopImageDataStream);
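One detail worth noting (an assumption on my part; the snippet doesn't show whether this block runs once per incoming connection): the QObject::connect(this, ...) calls accumulate if they run on every new socket, so a signal like s_JSONReadyToSend ends up invoking its slot once per past connection, which can look exactly like old offers being resent. A minimal guard, as a sketch, is Qt::UniqueConnection (it works with member-function-pointer connections like these):

// Hypothetical variant of the self-connections above: Qt::UniqueConnection
// makes connect() a no-op when an identical connection already exists, so
// reconnecting clients do not stack duplicate slot invocations.
QObject::connect(this, &c_module_videostream::s_JSONReadyToSend,
                 this, &c_module_videostream::sendJSONTextMessage,
                 Qt::UniqueConnection);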
The pipeline is created with this command:
pipeline_description = pipeline_description
    + "appsrc name=CaliCam ! "
    + "video/x-raw, format=BGR, width=" + width + ", height=" + height + ", framerate=" + fps + "/1 ! "
    + "videoconvert ! "
    //+ "queue max-size-buffers=10 ! "
    + "x264enc bitrate=1000 speed-preset=ultrafast tune=zerolatency key-int-max=10 ! "
    + "video/x-h264,profile=constrained-baseline ! "
    + "h264parse ! "
    + "rtph264pay config-interval=-1 name=payloader ! "
    + "application/x-rtp, media=video, encoding-name=H264, payload=96 ! "
    + "webrtcbin name=webrtcbin_send";
pipeline = gst_parse_launch(pipeline_description.c_str(), &error);
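It may also be worth checking the result here: gst_parse_launch() can return NULL, or a partially constructed pipeline with the error set. A minimal check, assuming error was initialized to NULL in the same scope:

// Sketch: surface parse failures instead of starting a broken pipeline.
if (error != NULL) {
    RCLCPP_ERROR_STREAM(nh->get_logger(), MODULE_NAME << "Parse error: " << error->message);
    g_clear_error(&error);
}
if (pipeline == NULL) {
    return; // nothing to start
}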
After that, the pipeline is started with the following code:
// Start the Pipeline
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting Pipeline");
GstState cur_state;
int returnvalue;
do
{
    RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting...");
    returnvalue = gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
    usleep(2000000);
    gst_element_get_state(GST_ELEMENT(pipeline), &cur_state, NULL, GST_CLOCK_TIME_NONE);
    RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Pipeline status " << cur_state << "/" << returnvalue << "/" << (int)GST_STATE_PLAYING);
} while (cur_state != GST_STATE_PLAYING);
// Pipeline is open, Subscribe to Image transport
image_transport_Subscriber = nh->create_subscription<sensor_msgs::msg::Image>("calicam_front/left/image_rect", 1, std::bind(&c_module_videostream::imageCb, this, std::placeholders::_1));
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "subs::" << this->image_transport_Subscriber);
RCLCPP_INFO_STREAM(nh->get_logger(), MODULE_NAME << "Starting Pipeline DONE");
pushImages = true;
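As an aside, a single blocking call can usually replace the poll-and-sleep loop above, since gst_element_get_state() with GST_CLOCK_TIME_NONE already waits until the asynchronous state change completes or fails. A minimal sketch:

// Sketch: one blocking wait instead of the retry loop; the return value
// distinguishes success from a failed state change.
gst_element_set_state(GST_ELEMENT(pipeline), GST_STATE_PLAYING);
if (gst_element_get_state(GST_ELEMENT(pipeline), &cur_state, NULL, GST_CLOCK_TIME_NONE)
        == GST_STATE_CHANGE_FAILURE)
{
    RCLCPP_ERROR_STREAM(nh->get_logger(), MODULE_NAME << "Pipeline failed to reach PLAYING");
}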
I tried to tear the pipeline down with these calls, but the old state still remains:
gst_element_set_state(this->pipeline, GST_STATE_NULL);
gst_object_unref(GST_OBJECT(this->pipeline));
gst_object_unref(GST_OBJECT(this->webrtcbin));
gst_object_unref(GST_OBJECT(this->appsrc));
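For what it's worth, a fuller teardown sketch is below. The signal-handler disconnect is an assumption on my part (the snippet doesn't show where on-negotiation-needed / on-ice-candidate are connected), but stale webrtcbin callbacks and leftover signal connections are common sources of "old" offers reappearing:

// Hedged sketch: disconnect webrtcbin signal handlers first so no stale
// negotiation/ICE callbacks fire, wait for the pipeline to actually reach
// NULL, then drop the refs (assuming webrtcbin/appsrc were obtained via
// gst_bin_get_by_name(), which returns an extra ref).
g_signal_handlers_disconnect_by_data(this->webrtcbin, this);
gst_element_set_state(this->pipeline, GST_STATE_NULL);
gst_element_get_state(this->pipeline, NULL, NULL, GST_CLOCK_TIME_NONE); // block until NULL
gst_object_unref(this->appsrc);    this->appsrc = nullptr;
gst_object_unref(this->webrtcbin); this->webrtcbin = nullptr;
gst_object_unref(this->pipeline);  this->pipeline = nullptr;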
Related
I have a device running embedded linux that can show RTSP streams from a camera. The user can change the stream from a windowed stream to a full screen stream, and vice versa. If the stream is changed 32 times, the stream stops working. I have possibly narrowed down the problem to the rtspsrc itself.
My question is, how does one clear the memory for the gst "stuff" without re-starting the program?
If I use gst-launch-1.0 with the pipeline, it works for more than 32 re-starts because the program is being killed every time.
However, if I run my program and increase the rtspsrc count to 31 (by switching between the two streams), and then run gst-launch-1.0 with an rtsp pipeline, the stream does not show up! It appears that until every program using gst is killed, the rtspsrc count will not reset back to 0.
I enabled debugging for rtspsrc:
export GST_DEBUG="rtspsrc:6"
Lots of log messages are shown each time the stream is started. They print the rtspsrcX instance name, whose index increases even though the previous stream was stopped:
First run log print:
rtspsrc gstrtspsrc.c:8834:gst_rtspsrc_print_sdp_media:<rtspsrc0> RTSP response message
Second run:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc1> RTSP response message
Continue stopping/starting the stream, and it increases up to 31, at which point the stream no longer shows up:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc31> RTSP response message
I'm not sure how to "reset" the stream each time the user stops it. It seems that gst can't release memory unless I kill the whole program (all programs using gst).
I have tried creating a new context each time the stream is re-started, but this doesn't help.
When I call gst_is_initialized each subsequent time, it returns true.
The main loop is stopped by calling the following from another thread:
g_main_loop_quit(loop_);
The video feeds are controlled with the following:
GMainLoop *loop_;
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=512 window-height=384 sync=false"
or
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=1024 window-height=768 sync=false"
void stream_video(std::string pipeline)
{
    GMainContext *context;
    GstElement *pipelineElement;
    GstBus *bus = NULL;
    guint bus_watch_id = 0;
    GstState state;

    try
    {
        if (!gst_is_initialized())
        {
            std::cout << "GST is not initialized - initializing " << pipeline.c_str();
            gst_init_check(nullptr, nullptr, nullptr);
        }

        // Creating a new context to see if the camera can be started more than
        // 32 times, but the rtspsrc index still increases when debugging
        context = g_main_context_new();
        loop_ = g_main_loop_new(context, FALSE);

        pipelineElement = gst_parse_launch(pipeline.c_str(), NULL);
        bus = gst_pipeline_get_bus(GST_PIPELINE(pipelineElement));
        bus_watch_id = gst_bus_add_watch(bus, bus_call, loop_);
        gst_object_unref(bus);
        bus = NULL;

        gst_element_set_state(pipelineElement, GST_STATE_READY);
        gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
        gst_element_set_state(pipelineElement, GST_STATE_PLAYING);

        if (gst_element_get_state(pipelineElement, &state, NULL, 2 * GST_SECOND) == GST_STATE_CHANGE_FAILURE)
        {
            std::cout << "gst: Failed to change states. State:" << state << " ID: " << stream_id_;
        }
        else
        {
            std::cout << "gst: Running..." << " ID: " << stream_id_ << " State:" << state << " Loop:" << loop_;
            g_main_loop_run(loop_); // blocks until loop_ exits (EOS, error, stop request)
        }

        // Can only switch between certain states, see
        // https://gstreamer.freedesktop.org/documentation/additional/design/states.html?gi-language=c
        gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
        gst_element_set_state(pipelineElement, GST_STATE_READY);
        gst_element_set_state(pipelineElement, GST_STATE_NULL);

        g_source_remove(bus_watch_id);
        std::cout << "gst: Removing pipelineElement " << pipelineElement;
        gst_object_unref(GST_OBJECT(pipelineElement));
        pipelineElement = NULL;
        g_main_context_unref(context);
        context = NULL;
        g_main_loop_unref(loop_);
        loop_ = nullptr;
        std::cout << "gst: Deleted pipeline" << " ID: " << stream_id_ << " State: " << state;
    }
    catch (const std::exception &e)
    {
        std::cout << "Error Caught: stream_video " << e.what();
    }
    return;
}
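One detail that stands out in stream_video() (an observation, not a verified fix for the 32-stream limit): since GStreamer 1.x, gst_bus_add_watch() attaches the watch to the thread-default main context, and the freshly created context here is never made thread-default, so the watch lands on the global default context rather than the one loop_ iterates. A sketch of the push/pop pattern, under that assumption:

// Sketch: make the new context the thread default while setting up and
// running the loop, so the bus watch is dispatched by loop_ itself.
context = g_main_context_new();
g_main_context_push_thread_default(context);
loop_ = g_main_loop_new(context, FALSE);
/* ... build the pipeline, gst_bus_add_watch(), g_main_loop_run(loop_) as before ... */
// With a non-default context, remove the watch via gst_bus_remove_watch()
// rather than g_source_remove(), which only searches the global default context.
g_main_context_pop_thread_default(context);
g_main_context_unref(context);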
I am implementing a video streaming pipeline using gst-rtsp-server. I need to know when an RTSP client both connects and disconnects.
From the examples provided with gst-rtsp-server, I can detect a client connecting using the "client-connected" signal of the GstRTSPServer. I'm looking for something similar for when the client disconnects.
I have tried the "closed" and "teardown-request" signals of GstRTSPClient, but those don't do anything when I disconnect the client.
I have also tried calling the following function on a timer, like it is done in several examples. I would expect that to print "Removed 1 sessions" at some point after I've terminated the client, but it never does.
static gboolean
remove_sessions (GstRTSPServer * server)
{
    GstRTSPSessionPool *pool;

    pool = gst_rtsp_server_get_session_pool (server);
    guint removed = gst_rtsp_session_pool_cleanup (pool);
    g_object_unref (pool);
    g_print ("Removed %d sessions\n", removed);
    return TRUE;
}
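For reference, the timer hookup the examples use looks like the sketch below. Note that gst_rtsp_session_pool_cleanup() only removes sessions whose timeout has expired, and the default session timeout is 60 seconds, so the "Removed 1 sessions" line only shows up a good while after the client goes away:

/* Sketch: poll the pool every 2 seconds from the default main loop.
 * remove_sessions() returns TRUE, so the timer keeps firing. */
g_timeout_add_seconds(2, (GSourceFunc) remove_sessions, server);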
My client is the following gstreamer pipeline:
gst-launch-1.0 -v rtspsrc location=rtsp://$STREAM_IP:8554/test latency=50 ! queue ! rtph264depay ! queue ! avdec_h264 ! autovideosink sync=false
How can I detect client disconnections?
Call gst_rtsp_server_client_filter() when you need to close the RTSP server (before deleting the server):
GstRTSPFilterResult clientFilterFunc(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
    return GST_RTSP_FILTER_REMOVE;
}

. . .
{
    . . .
    if (clientCount)
        gst_rtsp_server_client_filter(server, clientFilterFunc, nullptr);

    if (G_IS_OBJECT(server))
    {
        g_object_unref(server);
        server = nullptr;
    }
    . . .
}
Code snippet for client connection and close:
void clientClosed(GstRTSPClient* client, gpointer user)
{
    --clientCount;
    std::stringstream strm;
    strm << "Client closed ... count: " << clientCount << std::endl;
    g_print("%s", strm.str().c_str());
}

void clientConnected(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
    ++clientCount;
    // hook the client close callback
    g_signal_connect(client, "closed", reinterpret_cast<GCallback>(clientClosed), user);
    std::stringstream strm;
    strm << "Client connected ... count: " << clientCount << std::endl;
    g_print("%s", strm.str().c_str());
}

{
    . . .
    g_signal_connect(server, "client-connected", reinterpret_cast<GCallback>(clientConnected), &(testData));
    . . .
}
Not sure what problems I had before, but this actually works:
When the client is shut down (Ctrl+C on the gst-launch-1.0 pipeline), the "teardown-request" signal of GstRTSPClient is emitted.
If the client loses connection to the server, the remove_sessions (GstRTSPServer * server) function I posted will report that it removed a session after some time.
My GStreamer version is 1.17, cross-compiled using instructions from here.
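For completeness, a sketch of hooking that signal (the callback signature is my reading of the GstRTSPClient docs; treat it as an assumption):

// Sketch: log RTSP TEARDOWN requests; connect this inside the
// "client-connected" handler, next to the "closed" hookup above.
static void clientTeardown(GstRTSPClient *client, GstRTSPContext *ctx, gpointer user)
{
    g_print("Client sent TEARDOWN\n");
}
. . .
g_signal_connect(client, "teardown-request", G_CALLBACK(clientTeardown), user);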
Here is my GStreamer pipeline:
appsrc name=framesrc0 do-timestamp=true format=time ! video/x-raw,width=640,height=480,framerate=30/1,format=NV12 ! queue ! x264enc ! queue ! h264parse ! mpegtsmux ! filesink name=mysink location=./myfile.ts
I feed NV12 frames to appsrc using the below function (640*480*1.5 = 460800 bytes).
bool BelGst::FeedData0(uint8_t *buf, uint32_t len)
{
    GstFlowReturn ret;
    GstBuffer *buffer;
    GstMapInfo info;
    timespec ts_beg, ts_end;
    uint32_t time_ms;

    clock_gettime(CLOCK_MONOTONIC, &ts_beg);
    ret = gst_buffer_pool_acquire_buffer (pool0, &buffer, NULL);
    if (G_UNLIKELY (ret != GST_FLOW_OK)) {
        cout << "BufferPool pool0 failed" << endl;
        return FALSE;
    }
    clock_gettime(CLOCK_MONOTONIC, &ts_end);
    time_ms = (ts_end.tv_sec - ts_beg.tv_sec) * 1000 + (ts_end.tv_nsec - ts_beg.tv_nsec) / 1e6;
    cout << "Buffer pool acquire time = " << time_ms << "ms" << endl;

    /* Set its timestamp and duration */
    GST_BUFFER_TIMESTAMP(buffer) = timestamp0;
    GST_BUFFER_DURATION(buffer) = gst_util_uint64_scale(1, GST_SECOND, 30);
    GST_BUFFER_OFFSET(buffer) = offset0++;
    timestamp0 += GST_BUFFER_DURATION(buffer);

    gst_buffer_map(buffer, &info, GST_MAP_WRITE);
    memcpy(info.data, buf, len);
    gst_buffer_unmap(buffer, &info);

    g_signal_emit_by_name(app_source0, "push-buffer", buffer, &ret);
    gst_buffer_unref(buffer);
    return TRUE;
}
I've set up the buffer pool as shown below:
void BufferPoolSetup(GstBufferPool *&pool)
{
    GstStructure *config;
    int size, min, max;
    GstCaps *caps;

    pool = gst_buffer_pool_new();
    /* get config structure */
    config = gst_buffer_pool_get_config(pool);
    size = 640 * 480 * 1.5;
    min = 1;
    max = 4;
    caps = gst_caps_from_string("video/x-raw");
    /* set caps, size, minimum and maximum buffers in the pool */
    gst_buffer_pool_config_set_params (config, caps, size, min, max);
    gst_caps_unref(caps);
    gst_buffer_pool_set_config (pool, config);
    /* and activate */
    gst_buffer_pool_set_active (pool, TRUE);
    return;
}
When I run the pipeline, I see that gst_buffer_pool_acquire_buffer takes somewhere between 20ms and 60ms. Could someone point out whether there is something wrong in my approach? Am I missing something?
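One possible explanation worth checking (an assumption based on the pipeline shown, not something verifiable from the snippet alone): with min=1/max=4, gst_buffer_pool_acquire_buffer() blocks once all four buffers are in flight, and x264enc without tune=zerolatency keeps a sizeable lookahead of input frames referenced, so each acquire ends up waiting for the encoder to release one. A sketch that removes the ceiling:

/* Sketch: max_buffers = 0 means "no maximum", so acquire allocates a new
 * buffer instead of blocking when the pool is drained. */
gst_buffer_pool_config_set_params(config, caps, size, 1 /* min */, 0 /* max */);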
I am facing a problem with the OpenH264 library (https://github.com/cisco/openh264).
I would like to decode a stream sent over the network by my Raspberry Pi camera in a C++/Qt program, which will display the image.
I'm using GStreamer on the Raspberry Pi side to send the stream with this command line:
raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=my_ip port=5000
On the desktop side, when I execute:
gst-launch-1.0 -v tcpclientsrc host=raspberry_ip port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false
I'm able to see the stream of the camera correctly.
Okay, so now I would like to make my own decoder using Qt/C++ and the OpenH264 decoder. Here is my code:
void Manager::initDecoder(int width, int height) {
    long ret = WelsCreateDecoder(&(this->decoder));
    if (ret == 0) {
        this->decodingParam = { 0 };
        this->decodingParam.sVideoProperty.eVideoBsType = VIDEO_BITSTREAM_DEFAULT;
        this->decoder->Initialize(&decodingParam);
        this->bufferInfo = { 0 };
        this->yuvData = new uint8_t*[3];
        this->yuvData[0] = new uint8_t[width * height];
        this->yuvData[1] = new uint8_t[width * height / 4];
        this->yuvData[2] = new uint8_t[width * height / 4];
        this->tcpSocket->connectToHost("ip_raspberry", 5000);
    }
}
bool Manager::decodeStream(const unsigned char *rawEncodedData, const int rawEncodedDataLength, uint8_t **yuvData) {
    DECODING_STATE err = decoder->DecodeFrame2(rawEncodedData, rawEncodedDataLength, yuvData, &bufferInfo);
    if (err != 0) {
        qDebug() << "H264 decoding failed. Error code: " << err << ".";
        return false;
    }
    qDebug() << "----------- Succeeded --------------";
    return true;
}
When I get new data, I call decodeStream, but this function returns an error:
dsBitstreamError = 0x04, ///< error bitstreams(maybe broken internal frame) the decoder cared
dsNoParamSets = 0x10, ///< no parameter set NALs involved
I don't know what I'm doing wrong... Should I change some parameters on the Raspberry Pi sending side?
Thanks for helping.
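One thing visible from the sending pipeline alone (the rest is an assumption about the receiving code): the TCP socket carries GDP-framed RTP packets, not plain Annex-B H.264, so feeding those bytes directly to DecodeFrame2 would produce exactly dsBitstreamError / dsNoParamSets. A sketch of a sender that puts raw Annex-B (with SPS/PPS repeated in-band at every IDR) on the socket instead:

raspivid -n -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 fdsrc ! h264parse config-interval=-1 ! video/x-h264,stream-format=byte-stream ! tcpserversink host=my_ip port=5000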
I am new to GStreamer, and I have a question about why my elements will not link together. Here is my code:
CustomData data;
data.videosource = gst_element_factory_make("uridecodebin", "source");
cout << "Created source element " << data.videosource << endl;
data.demuxer = gst_element_factory_make("qtdemux", "demuxer");
cout << "Created demux element " << data.demuxer << endl;
data.decoder = gst_element_factory_make("ffdec_h264", "video-decoder");
cout << "Went to the video path " << data.decoder << endl;
data.videoconvert = gst_element_factory_make("ffmpegcolorspace", "convert");
cout << "Created convert element " << data.videoconvert << endl;
data.videosink = gst_element_factory_make("autovideosink", "sink");
cout << "Created sink element " << data.videosink << endl;
if (!data.videosource || !data.demuxer || !data.decoder || !data.videoconvert || !data.videosink)
{
    g_printerr ("Not all elements could be created.\n");
    system("PAUSE");
    return;
}

// Creating the pipeline
data.pipeline = gst_pipeline_new("video-pipeline");
if (!data.pipeline)
{
    g_printerr ("Pipeline could not be created.");
}

// Setting up the object
g_object_set(data.videosource, "uri", videoFileName[camID], NULL);
// videoFileName[camID] is a char** with the content uri=file:///C://videofiles/...mp4

// Adding elements to the pipeline
gst_bin_add_many(GST_BIN (data.pipeline), data.videosource, data.demuxer, data.decoder, data.videoconvert, data.videosink, NULL);

// This is where the issue occurs
if (!gst_element_link(data.videosource, data.demuxer))
{
    g_printerr("Elements could not be linked. \n");
    system("PAUSE");
    return;
}
What I am trying to do is demux an mp4 file and display only the video content, but for some reason when I try to link the source and demuxer, the link comes back as false.
Thank you guys so much!
Let's have a look at the pipeline you're using (I'll use gst-launch here for its brevity, but the same goes for any GStreamer pipelines):
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
! autovideosink
gst-inspect uridecodebin states:
Autoplug and decode an URI to raw media
So uridecodebin takes any audio/video source and decodes it by internally using some of GStreamer's other elements.
Its output is something like video/x-raw-rgb or audio/x-raw-int (raw audio/video).
qtdemux, on the other hand, takes a QuickTime stream (still encoded) and demuxes it.
But what it gets in your example is already-decoded raw video, which is why it won't link.
So, you've basically got two options:
just use uridecodebin
gst-launch uridecodebin uri=file:///path/to/movie.avi \
! autovideosink
which will allow your pipeline to decode pretty much any video file
just use the qtdemux ! ffdec_h264 ! ffmpegcolorspace elements:
gst-launch filesrc location=/path/to/movie.avi \
    ! qtdemux ! ffdec_h264 ! ffmpegcolorspace \
    ! autovideosink
Keep in mind however that your pipeline doesn't play audio.
To get that as well do one of the following:
Simply use playbin2
gst-launch playbin2 uri=file:///path/to/movie.avi
Connect your uridecodebin to an audio sink as well:
gst-launch uridecodebin name=d uri=... ! autovideosink d. ! autoaudiosink
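Since the original code links the elements programmatically, one more point: uridecodebin (like qtdemux) creates its source pads dynamically, so gst_element_link() at construction time fails no matter what follows it; the usual pattern is a "pad-added" callback. A minimal sketch for the uridecodebin-only variant, reusing the element names from the question:

// Sketch: link uridecodebin's dynamically created pad to the converter.
// A real implementation should check the new pad's caps first, since
// audio pads can appear here as well.
static void on_pad_added(GstElement *src, GstPad *new_pad, gpointer user_data)
{
    GstElement *downstream = GST_ELEMENT(user_data);
    GstPad *sink_pad = gst_element_get_static_pad(downstream, "sink");
    if (!gst_pad_is_linked(sink_pad))
        gst_pad_link(new_pad, sink_pad);
    gst_object_unref(sink_pad);
}

/* after creating the elements: */
g_signal_connect(data.videosource, "pad-added", G_CALLBACK(on_pad_added), data.videoconvert);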