I am implementing a video streaming pipeline using gst-rtsp-server. I need to know when an RTSP client both connects and disconnects.
From the examples provided with gst-rtsp-server, I can detect a client connecting using the "client-connected" signal of the GstRTSPServer. I'm looking for something similar for when the client disconnects.
I have tried the "closed" and "teardown-request" signals of GstRTSPClient, but those don't do anything when I disconnect the client.
I have also tried calling the following function on a timer, like it is done in several examples. I would expect that to print "Removed 1 sessions" at some point after I've terminated the client, but it never does.
static gboolean
remove_sessions (GstRTSPServer * server)
{
GstRTSPSessionPool *pool;
pool = gst_rtsp_server_get_session_pool (server);
guint removed = gst_rtsp_session_pool_cleanup (pool);
g_object_unref (pool);
g_print("Removed %d sessions\n", removed);
return TRUE;
}
My client is the following GStreamer pipeline:
gst-launch-1.0 -v rtspsrc location=rtsp://$STREAM_IP:8554/test latency=50 ! queue ! rtph264depay ! queue ! avdec_h264 ! autovideosink sync=false
How can I detect client disconnections?
Call gst_rtsp_server_client_filter() when you need to close the RTSP server (before deleting the server):
GstRTSPFilterResult clientFilterFunc(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
return GST_RTSP_FILTER_REMOVE;
}
. . .
{
. . .
if( clientCount )
gst_rtsp_server_client_filter(server, clientFilterFunc, nullptr);
if (G_IS_OBJECT(server))
{
g_object_unref(server);
server = nullptr;
}
. . .
}
Code snippet for client connection and close:
{
void clientClosed(GstRTSPClient* client, gpointer user)
{
--clientCount;
std::stringstream strm;
strm << "Client closed ... count: " << clientCount << std::endl;
g_print("%s", strm.str().c_str());
}
void clientConnected(GstRTSPServer* server, GstRTSPClient* client, gpointer user)
{
++clientCount;
// hook the client close callback
g_signal_connect(client, "closed", reinterpret_cast<GCallback>(clientClosed), user);
std::stringstream strm;
strm << "Client connected ... count: " << ptrTestData->m_clientCount << std::endl;
g_print("%s", strm.str().c_str());
}
{
. . .
g_signal_connect(server, "client-connected", reinterpret_cast<GCallback>(clientConnected), &(testData));
. . .
}
}
Not sure what problems I had before, but this actually works:
When the client is shut down (Ctrl+C on the gst-launch-1.0 pipeline), the "teardown-request" signal of GstRTSPClient is emitted.
If the client loses connection to the server, the remove_sessions (GstRTSPServer * server) function I posted will report that it removed a session after some time.
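For anyone who wants a callback at that point, here is a minimal, untested sketch of hooking the per-client "teardown-request" signal from "client-connected" (C-style GLib, matching the snippets above):
static void
on_teardown_request (GstRTSPClient * client, GstRTSPContext * ctx,
    gpointer user_data)
{
  g_print ("Client requested TEARDOWN\n");
}
static void
on_client_connected (GstRTSPServer * server, GstRTSPClient * client,
    gpointer user_data)
{
  /* the GstRTSPClient object only exists once a client has connected,
   * so the per-client signals have to be hooked from here */
  g_signal_connect (client, "teardown-request",
      G_CALLBACK (on_teardown_request), NULL);
}
During setup, connect on_client_connected to the server's "client-connected" signal, exactly as in the answer above.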
Related
I have a device running embedded Linux that can show RTSP streams from a camera. The user can change the stream from a windowed stream to a full screen stream, and vice versa. If the stream is changed 32 times, the stream stops working. I have possibly narrowed down the problem to the rtspsrc itself.
My question is, how does one clear the memory for the gst "stuff" without re-starting the program?
If I use gst-launch-1.0 with the pipeline, it works for more than 32 re-starts because the program is being killed every time.
However, if I run my program and increase the rtspsrc count to 31 (by switching between the two streams), and then run gst-launch-1.0 with an RTSP pipeline, the stream does not show up! It appears that until every program that is using gst is killed, the rtspsrc count will not reset back to 0.
I enabled debugging the rtspsrc:
export GST_DEBUG="rtspsrc:6"
Lots of log messages are shown each time the stream is started. They print the rtspsrcX element name, whose index increases even though the previous stream was stopped:
First run log print:
rtspsrc gstrtspsrc.c:8834:gst_rtspsrc_print_sdp_media:<rtspsrc0> RTSP response message
Second run:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc1> RTSP response message
Continue stopping/starting the stream, and it increases up to 31, at which point the stream no longer shows up:
rtspsrc gstrtspsrc.c:8855:gst_rtspsrc_print_sdp_media:<rtspsrc31> RTSP response message
I'm not sure how to "reset" the stream each time the user stops it. It seems that gst can't release memory unless I kill the whole program (all programs using gst).
I have tried creating a new context each time the stream is re-started, but this doesn't help.
When I call gst_is_initialized each subsequent time, it returns true.
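Note that a rising rtspsrcN index by itself does not prove a leak: GStreamer names elements from a per-process counter that is never reset, so the number keeps growing even if every previous element was destroyed correctly. To check whether objects are genuinely leaking, the leaks tracer (available since GStreamer 1.8, assuming it was compiled in) can be enabled alongside the debug output above; leaked objects are then listed when the program exits:
export GST_TRACERS="leaks"
export GST_DEBUG="GST_TRACER:7"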
The main loop is stopped by calling the following from another thread:
g_main_loop_quit(loop_);
The video feeds are controlled with the following:
GMainLoop *loop_;
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=512 window-height=384 sync=false"
or
pipeline = "rtspsrc location=rtsp://192.168.0.243/0 latency=0 ! rtph264depay ! h264parse ! imxvpudec ! imxipuvideosink window-width=1024 window-height=768 sync=false"
void stream_video(std::string pipeline)
{
GMainContext* context;
GstElement *pipelineElement;
GstBus *bus = NULL;
guint bus_watch_id = 0;
GstState state;
try
{
if(!gst_is_initialized())
{
std::cout << "GST Is not initialized - initializing " << pipeline.c_str();
gst_init_check(nullptr,nullptr,nullptr);
}
context = g_main_context_new(); // Creating a new context to see if the camera can be started more than 32 times, but the rtspsrc still increases when debugging
loop_ = g_main_loop_new (context, FALSE);
pipelineElement = gst_parse_launch(pipeline.c_str(), NULL);
bus = gst_pipeline_get_bus (GST_PIPELINE (pipelineElement));
bus_watch_id = gst_bus_add_watch (bus, bus_call, loop_);
gst_object_unref (bus);
bus = NULL;
gst_element_set_state(pipelineElement, GST_STATE_READY );
gst_element_set_state(pipelineElement, GST_STATE_PAUSED );
gst_element_set_state(pipelineElement, GST_STATE_PLAYING);
if (gst_element_get_state (pipelineElement, &state, NULL, 2*GST_SECOND) == GST_STATE_CHANGE_FAILURE)
{
std::cout << "gst: Failed to chage states State:" << state << " ID: " << stream_id_;
}
else
{
std::cout << "gst: Running..." << " ID: " << stream_id_ << " State:" << state << " Loop:" << loop_;
g_main_loop_run (loop_); // blocks until loop_ exits (EOS, error, stop request)
}
gst_element_set_state(pipelineElement, GST_STATE_PAUSED);
gst_element_set_state(pipelineElement, GST_STATE_READY );
gst_element_set_state(pipelineElement, GST_STATE_NULL); // Can only switch between certain states, see https://gstreamer.freedesktop.org/documentation/additional/design/states.html?gi-language=c
g_source_remove (bus_watch_id);
std::cout << "gst: Removing pipelineElement " << pipelineElement;
gst_object_unref (GST_OBJECT (pipelineElement));
pipelineElement = NULL;
g_main_context_unref (context);
context = NULL;
g_main_loop_unref (loop_);
loop_ = nullptr;
std::cout << "gst: Deleted pipeline" << " ID: " << stream_id_ << " State: " << state;
}
catch(const std::exception& e)
{
std::cout << "Error Caught: stream_video " << e.what();
}
return;
}
I am using the gRPC sync API with C++.
Here is how I am checking on the server side whether the client has stopped the stream.
grpc::Status AuthServer::ConnectServiceImpl::HearthBeat(grpc::ServerContext *context,
grpc::ServerReaderWriter<Pulse, Pulse> *stream) {
Pulse note;
if(ctx_.IsCancelled()){
std::cout << "DISCONNECT" << std::endl;
}
while (stream->Read(¬e)) {
Pulse reply;
reply.set_rate(note.rate()+1);
std::cout << "RECEIVED: " << note.rate() << std::endl;
stream->Write(reply);
}
return grpc::Status::OK;
}
This is a bidi stream which is stopped forcefully on the client side by killing the client app, and still the "DISCONNECT" message does not appear.
Why is that? Am I using IsCancelled() incorrectly?
I think I already answered this in GRPC/C++ - How to detect client disconnected in Async Server.
Your code appears to be checking IsCancelled() on ctx_. I'm not sure what that object is, but the context you want to be checking is the one passed into the request handler method as context.
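For reference, a minimal sketch of the handler with that change; with the sync API, Read() returns false once the stream is finished or broken, and checking IsCancelled() after the loop distinguishes a killed/cancelled client from a clean finish:
grpc::Status AuthServer::ConnectServiceImpl::HearthBeat(grpc::ServerContext *context,
                                                        grpc::ServerReaderWriter<Pulse, Pulse> *stream) {
    Pulse note;
    while (stream->Read(&note)) {
        Pulse reply;
        reply.set_rate(note.rate() + 1);
        std::cout << "RECEIVED: " << note.rate() << std::endl;
        stream->Write(reply);
    }
    // Read() has returned false: either the client finished cleanly or the
    // connection broke. Only now does IsCancelled() give a reliable answer.
    if (context->IsCancelled()) {
        std::cout << "DISCONNECT" << std::endl;
        return grpc::Status::CANCELLED;
    }
    return grpc::Status::OK;
}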
I am writing a simple client/server communication with a fifo, but I am stuck at using a signal handler to process client requests.
The server opens a fifo in read-only and non-blocking mode, reads the data received, and writes back some data to the client fifo.
And this actually works fine when there is no signal handler on the server side. Here is the main code for both sides.
Server :
int main(int argc, char *argv[])
{
// install handler
struct sigaction action;
action.sa_handler = requestHandler;
sigemptyset(&(action.sa_mask));
action.sa_flags = SA_RESETHAND | SA_RESTART;
sigaction(SIGIO, &action, NULL);
if(!makeFifo(FIFO_READ, 0644))
exit(1);
int rd_fifo = openFifo(FIFO_READ, O_RDONLY | O_NONBLOCK); // non blocking
if(rd_fifo == -1)
exit(1);
// wait for request and answer
while (1) {
qWarning() << "waiting client...";
sleep(1);
QString msg = readFifo(rd_fifo);
qWarning() << "msg = " << msg;
if(msg == "ReqMode") {
int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY); // blocking
writeFifo(wr_fifo, QString("mode"));
break;
} else
qWarning() << "unknow request ..";
}
close(rd_fifo);
unlink(FIFO_READ);
return 0;
}
Client :
int main(int argc, char *argv[])
{
int wr_fifo = openFifo(FIFO_WRITE, O_WRONLY);
if(wr_fifo == -1)
exit(1);
// create a fifo to read server answer
if(!makeFifo(FIFO_READ, 0644))
exit(1);
// ask the server his mode
writeFifo(wr_fifo, QString("ReqMode"));
// read his answer and print it
int rd_fifo = openFifo(FIFO_READ, O_RDONLY); // blocking
qWarning() << "server is in mode : " << readFifo(rd_fifo);
close(rd_fifo);
unlink(FIFO_READ);
return 0;
}
Everything works as expected (even if not all errors are properly handled; this is just sample code to demonstrate that this is possible).
The problem is that the handler (not shown here, but it only prints a message on the terminal with the signal received) is never called when the client writes data to the fifo. Besides, I have checked that if I send a kill -SIGIO to the server from a bash (or from elsewhere), the signal handler is executed.
Thanks for your help.
Actually, I missed the following 3 lines on the server side:
fcntl(rd_fifo, F_SETOWN, getpid()); // set PID of the receiving process
fcntl(rd_fifo, F_SETFL, fcntl(rd_fifo, F_GETFL) | O_ASYNC); // enable asynchronous behaviour
fcntl(rd_fifo, F_SETSIG, SIGIO); // set the signal that is sent when the kernel tells us that there is a read/write on the fifo.
The last point was important because the default signal sent was 0 in my case, so I had to set it explicitly to SIGIO to make things work. Here is the output of the server side:
waiting client...
nb_read = 0
msg = ""
unknown request ..
waiting client...
signal 29
SIGPOLL
nb_read = 7
msg = "ReqMode"
Now, I guess it's possible to handle the request inside the handler by moving what is inside the while loop into the requestHandler function.
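For reference, here is a minimal self-contained sketch of that setup (compiled as C++ to match the Qt-flavoured server above; the fifo path and error handling are reduced to the essentials, and F_SETSIG is Linux-specific):
#include <fcntl.h>   // open, fcntl; F_SETSIG needs _GNU_SOURCE (predefined by g++)
#include <signal.h>
#include <unistd.h>
static void requestHandler(int signum)
{
    // Real code should only set a flag here and read in the main loop;
    // write() is used because it is async-signal-safe.
    const char msg[] = "SIGIO: data available on the fifo\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);
}
static int openAsyncFifo(const char *path)
{
    int fd = open(path, O_RDONLY | O_NONBLOCK);
    if (fd == -1)
        return -1;
    struct sigaction action = {};
    action.sa_handler = requestHandler;
    sigemptyset(&action.sa_mask);
    action.sa_flags = SA_RESTART;
    sigaction(SIGIO, &action, nullptr);
    fcntl(fd, F_SETOWN, getpid());                    // deliver the signal to this process
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_ASYNC); // enable asynchronous notification
    fcntl(fd, F_SETSIG, SIGIO);                       // signal to send on fifo activity
    return fd;
}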
Maybe this is a stupid question (actually it's an appeal), or Qt is just too complicated for me.
Here's the thing:
I'm used to Java when writing client-server applications, and there it's very simple. I would like to do the same things in C++ (I'm very familiar with C++ itself), and I chose to learn Qt. I tried to write some applications in Qt, but with only partial success.
The first thing that bothers me is signals and slots. I know how to use them in GUI programming, but they confuse me with networking. And there's a problem with blocking. When I call BufferedReader's readLine() method in Java, it blocks until it receives a line from the socket connection. In Qt I must make sure that a line is available every time, and handle the case when there isn't one.
And when I connect QTcpSocket's error signal to one of my custom slots, the signal is emitted when the server sends the last line and closes the connection, and in the client's slot/function that reads, I never read that last line. Those are some of the problems I have faced so far.
Slots and checking whether there is data available confuse me when I have to implement even the simplest protocols.
Important part:
I tried to find a good example on the internet, but the problem is that all the examples are too complicated and big. Can anyone show me how to write a simple client-server application? The server accepts only one client. The client sends a text line containing a command. If the command is "ADD" or "SUB", the server sends "SUP", indicating that the command is supported. Otherwise it sends "UNS" and closes the connection. If the client receives "SUP", it sends two more lines containing the numbers to be added or subtracted. The server responds with the result and closes the connection.
I know that C++ requires more coding, but in Java this would take only 5 minutes, so it shouldn't take too long to write it in C++ either.
I'm sure this example would be very valuable to anyone who wants to learn networking in Qt.
edit:
This is my try to make the application (described above):
here is the server part:
#ifndef TASK_H
#define TASK_H
#include <QObject>
#include <QTcpServer>
class Task : public QObject
{
Q_OBJECT
public:
Task(QObject *parent = 0) : QObject(parent) {}
public slots:
void run();
void on_newConnection();
void on_error(QAbstractSocket::SocketError);
signals:
void finished();
private:
QTcpServer server;
};
#endif // TASK_H
void Task::run()
{
connect(&server,SIGNAL(newConnection()),this,SLOT(on_newConnection()));
connect(&server,SIGNAL(acceptError(QAbstractSocket::SocketError)),this,SLOT(on_error(QAbstractSocket::SocketError)));
if(server.listen(QHostAddress::LocalHost, 9000)){
qDebug() << "listening";
}else{
qDebug() << "cannot listen";
qDebug() << server.errorString();
}
}
void Task::on_newConnection(){
std::cout << "handeling new connection...\n";
QTcpSocket* socket = server.nextPendingConnection();
QTextStream tstream(socket);
while(!socket->canReadLine()){
socket->waitForReadyRead((-1));
}
QString operation = tstream.readLine();
qDebug() << "dbg:" << operation;
if(operation != "ADD" && operation != "SUB"){
tstream << "UNS\n";
tstream.flush();
socket->disconnect();
return;
}
tstream << "SUP\n";
tstream.flush();
double op1,op2;
while(!socket->canReadLine()){
socket->waitForReadyRead((-1));
}
op1 = socket->readLine().trimmed().toDouble();
qDebug() << "op1:" << op1;
while(!socket->canReadLine()){
socket->waitForReadyRead(-1);
}
op2 = socket->readLine().trimmed().toDouble();
qDebug() << "op2:" << op2;
double r;
if(operation == "ADD"){
r = op1 + op2;
}else{
r = op1 - op2;
}
tstream << r << "\n";
tstream.flush();
qDebug() << "result is: " << r;
socket->disconnect();
}
void Task::on_error(QAbstractSocket::SocketError ){
qDebug() << "server error";
server.close();
}
This is the client side (the header is similar to the server's, so I won't post it):
void Task::run()
{
QTcpSocket socket;
std::string temp;
socket.connectToHost(QHostAddress::LocalHost,9000);
if(socket.waitForConnected(-1))
qDebug() << "connected";
else {
qDebug() << "cannot connect";
return;
}
QTextStream tstream(&socket);
QString op;
std::cout << "operation: ";
std::cin >> temp;
op = temp.c_str();
tstream << op << "\n";
tstream.flush();
qDebug() << "dbg:" << op << "\n";
while(!socket.canReadLine()){
socket.waitForReadyRead(-1);
}
QString response = tstream.readLine();
qDebug() << "dbg:" << response;
if(response == "SUP"){
std::cout << "operand 1: ";
std::cin >> temp;
op = temp.c_str();
tstream << op + "\n";
std::cout << "operand 2: ";
std::cin >> temp;
op = temp.c_str();
tstream << op + "\n";
tstream.flush();
while(!socket.canReadLine()){
socket.waitForReadyRead(-1);
}
QString result = tstream.readLine();
std::cout << qPrintable("result is: " + result);
}else if(response == "UNS"){
std::cout << "unsupported operatoion.";
}else{
std::cout << "unknown error.";
}
emit finished();
}
What could I do better?
What are some good practices in similar situations?
When using blocking calls (not the signal/slot mechanism), what is the best way to handle the event when the other side closes the connection?
Can someone rewrite this to make it look more professional? I just want to see how it is supposed to look, because I think that my solution is far from perfect.
Can someone rewrite this using signals and slots?
Thank you.
Sorry for my English, and probably stupidity :)
Networking with Qt is not that difficult.
Communication between two points is handled by a single class; in the case of TCP/IP, that would be the QTcpSocket class. Both the client and server will communicate with a QTcpSocket object.
The only difference with the server is that you start with a QTcpServer object and call listen() to await a connection...
QTcpServer* m_pServer = new QTcpServer();
//create the address that the server will listen on
QHostAddress addr(QHostAddress::LocalHost); // assuming local host (127.0.0.1)
// start listening
bool bListening = m_pServer->listen(addr, _PORT); //_PORT defined as whatever port you want to use
When the server receives a connection from a client QTcpSocket, it will notify you with a newConnection signal, so assuming you've connected that signal to a slot in your own class, you can get the server-side QTcpSocket object to communicate with the client...
QTcpSocket* pServerSocket = m_pServer->nextPendingConnection();
The server will receive a QTcpSocket object for each connection made. The server socket can now be used to send data to a client socket, using the write method...
pServerSocket->write("Hello!");
When a socket (either client or server) receives data, it emits the readyRead signal. So, assuming you have made a connection to the readyRead signal for the socket, a slot function can retrieve the data...
QString msg = pSocket->readAll();
The other code you'll need handles the connected, disconnected and error signals, for which you should connect relevant slots to receive these notifications.
Ensure you only send data when you know the connection has been made. Normally, I would have the server receive a connection and send a 'hello' message back to the client. Once the client receives the message, it knows it can send to the server.
When either side disconnects, the remaining side will receive the disconnect signal and can act appropriately.
As for the client, it will just have one QTcpSocket object, and after calling connectToHost, you will either receive a connected signal if the connection was successfully made, or the error signal.
Finally, you can use QLocalServer and QLocalSocket in the same way, if you're just trying to communicate between processes on the same machine.
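Since a signal/slot rewrite was requested above, here is a minimal, untested sketch of the server side of the ADD/SUB protocol in that style (Qt 5 connect syntax; the class and member names are invented for illustration). The key idea is that readyRead can fire with partial data, so the slot only consumes complete lines and keeps per-socket protocol state between calls.
#include <QHash>
#include <QHostAddress>
#include <QTcpServer>
#include <QTcpSocket>
class CalcServer : public QObject
{
    Q_OBJECT
public:
    explicit CalcServer(QObject *parent = nullptr) : QObject(parent)
    {
        connect(&m_server, &QTcpServer::newConnection,
                this, &CalcServer::onNewConnection);
        m_server.listen(QHostAddress::LocalHost, 9000);
    }
private slots:
    void onNewConnection()
    {
        QTcpSocket *socket = m_server.nextPendingConnection();
        connect(socket, &QTcpSocket::readyRead, this, [this, socket]() {
            while (socket->canReadLine()) // only act on complete lines
                handleLine(socket, QString::fromUtf8(socket->readLine()).trimmed());
        });
        connect(socket, &QTcpSocket::disconnected, this, [this, socket]() {
            m_state.remove(socket);
            socket->deleteLater();
        });
    }
private:
    struct State { QString op; double op1 = 0; bool haveOp1 = false; };
    void handleLine(QTcpSocket *socket, const QString &line)
    {
        State &st = m_state[socket]; // default-constructed on first use
        if (st.op.isEmpty()) {
            if (line != "ADD" && line != "SUB") {
                socket->write("UNS\n");
                socket->disconnectFromHost();
                return;
            }
            st.op = line;
            socket->write("SUP\n");
        } else if (!st.haveOp1) {
            st.op1 = line.toDouble();
            st.haveOp1 = true;
        } else {
            double r = (st.op == "ADD") ? st.op1 + line.toDouble()
                                        : st.op1 - line.toDouble();
            socket->write(QByteArray::number(r) + "\n");
            socket->disconnectFromHost();
        }
    }
    QTcpServer m_server;
    QHash<QTcpSocket*, State> m_state;
};
The client can be written symmetrically, driving its next step from each complete line it receives instead of blocking in waitForReadyRead().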
I started to program client/server applications in J2ME recently. Now I'm working with C++ Builder 2010 Indy components (e.g. TIdTCPServer) and J2ME. My application is designed to restart the Kerio WinRoute Firewall service from a remote machine.
My server application is written in C++ Builder 2010. I've put a TIdTCPServer component on a form and bound it to 127.0.0.1:4500, so it is listening on port 4500 on the local machine.
Then I added a listbox, to which I add every incoming packet converted to a UnicodeString.
//void __fastcall TForm1::servExecute(TIdContext *AContext)
UnicodeString s;
UnicodeString txt;
txt=Trim(AContext->Connection->IOHandler->ReadLn());
otvet->Items->Add(txt);
otvet->ItemIndex=otvet->Items->Count-1;
if (txt=="1") {
AContext->Connection->IOHandler->WriteLn("Suhrob");
AContext->Connection->Disconnect();
}
if (txt=="2") {
AContext->Connection->IOHandler->WriteLn("Shodi");
AContext->Connection->Disconnect();
}
//----------------------------------------------------------------------------
// void __fastcall TForm1::servConnect(TIdContext *AContext)
++counter;
status->Panels->Items[0]->Text="Connections:" + IntToStr(counter);
status->Panels->Items[1]->Text="Connected to " + AContext->Connection->Socket->Binding->PeerIP + ":" + AContext->Connection->Socket->Binding->PeerPort;
and my client-side code looks something like this:
else if (command == send) {
// write pre-action user code here
InputStream is=null;
OutputStream os=null;
SocketConnection client=null;
ServerSocketConnection server=null;
try {
server = (ServerSocketConnection) Connector.open("socket://"+IP.getString()+":"+PORT.getString());
// wait for a connection
client = (SocketConnection) Connector.open("socket://"+IP.getString()+":"+PORT.getString());
// set application-specific options on the socket. Call setSocketOption to set other options
client.setSocketOption(SocketConnection.DELAY, 0);
client.setSocketOption(SocketConnection.KEEPALIVE, 0);
is = client.openInputStream();
os = client.openOutputStream();
// send something to server
os.write("texttosend".getBytes());
// read server response
int c = 0;
while((c = is.read()) != -1) {
// do something with the response
System.out.println((char)c);
}
// close streams and connection
}
catch( ConnectionNotFoundException error )
{
Alert alert = new Alert(
"Error", "Not responding!", null, null);
alert.setTimeout(Alert.FOREVER);
alert.setType(AlertType.ERROR);
switchDisplayable(alert, list);
}
catch (IOException e)
{
Alert alert = new Alert("ERror", e.toString(), null, null);
alert.setTimeout(Alert.FOREVER);
alert.setType(AlertType.ERROR);
switchDisplayable(alert, list);
e.printStackTrace();
}
finally {
if (is != null) {
try {
is.close();
} catch (Exception ex) {
System.out.println("Failed to close is!");
}
try {
os.close();
} catch (Exception ex) {
System.out.println("Failed to close os!");
}
}
if (server != null) {
try {
server.close();
} catch (Exception ex) {
System.out.println("Failed to close server!");
}
}
if (client != null) {
try {
client.close();
} catch (Exception ex) {
System.out.println("Failed to close client!");
}
}
}
My client application gets connected to the server, but when I try to send data such as
os.write("texttosend".getBytes());
I cannot get the text data on the server using the following; that is, the server is not receiving the packets sent from the client:
txt=Trim(AContext->Connection->IOHandler->ReadLn());
Where am I wrong? Is the way I'm doing it OK?
Or do I need to use StreamConnection instead of SocketConnection?
And when I use telnet to send data, it works fine; the strings are added to the listbox:
telnet 127.0.0.1 4500
texttosend
23
asf
Any help is appreciated!
Thanks in advance!
The main problem is that you are using ReadLn() on the server end. ReadLn() does not exit until a data terminator is encountered (an LF line-break character is the default terminator) or a read timeout occurs (Indy uses infinite timeouts by default). Your J2ME code is not sending any data terminator, so there is nothing to tell ReadLn() when to stop reading; appending a trailing "\n" to the string you pass to os.write() would fix that. The reason it works with Telnet is that Telnet does send line-break characters.
The other problem with your code is that TIdTCPServer is a multi-threaded component, but your code is updating the UI components in a thread-unsafe manner. You MUST synchronize with the main thread, such as by using Indy's TIdSync and/or TIdNotify classes, in order to update your UI safely from inside of the server's event handlers.
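As an illustration of the TIdNotify approach (a sketch, assuming Indy 10 and the Form1/otvet names from the question; TIdNotify instances free themselves after Notify() runs):
#include <IdSync.hpp>
class TLogNotify : public TIdNotify
{
protected:
    String FText;
    virtual void __fastcall DoNotify()
    {
        // runs in the main thread, so touching the VCL is safe here
        Form1->otvet->Items->Add(FText);
        Form1->otvet->ItemIndex = Form1->otvet->Items->Count - 1;
    }
public:
    __fastcall TLogNotify(const String &S) : TIdNotify(), FText(S) {}
};
// inside servExecute(), which runs in a worker thread:
UnicodeString txt = Trim(AContext->Connection->IOHandler->ReadLn());
(new TLogNotify(txt))->Notify(); // queues DoNotify() in the main thread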
Yes, calling the flush method after sending the bytes is necessary, but that alone didn't fix it. Finally, I tried to include my connection code in a new thread that implements Runnable, and it worked perfectly. Now I've found where I was wrong!
You need to include the above code in the following block:
Thread t = new Thread(this);
t.start();
public void run()
{
    // paste the connection code here
}
Try OutputStream.flush()?
If not, try writing to a known working server instead of one you've created yourself (something like writing "HELO" to an SMTP server); this will help you figure out which end the error is at.