I use QtScript in my application. Scripts are written by users.
For example, a script looks like this:
//<init test time counters>
function testIteration() {
    if (<test time is out>) {
        myCppObject.mySignalAllDone.disconnect(testIteration); // disconnection
        return;
    }
    //<Adding actions to myCppObject>
}
myCppObject.mySignalAllDone.connect(testIteration); // connection
testIteration();
From C++ I want to stop this script before the test time has passed, so I wrote a function like this:
void MainWindow::toolButtonStopScript_clicked(){
    disconnect(&this->myCppObject); // Disconnecting everything connected to myCppObject.
    this->scriptEngineThread.abortAllEvaluations();
    myCppObject.stopAllActivity(); // Emits mySignalAllDone, which is not disconnected
                                   // (why, and how do I disconnect it if I don't know which
                                   // connections the user made?). This calls testIteration(),
                                   // which appends activity to myCppObject, and it only ends
                                   // when the test time has passed. How to solve this?
    this->guiLog.log(GUILog::log_info, tr("Execution of script is interrupted by user"), this->logLevelMsgs);
    this->connectMyCppObject(); // Restore the default connections.
}
How to disconnect properly?
You can disconnect individual signals and slots:
disconnect(sender0, SIGNAL(overflow()), receiver1, SLOT(handleMathError()));
source: http://www.developer.nokia.com/Community/Wiki/Understanding_Signals_and_Slot_in_Qt
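If you don't know which slots or script functions were connected, the wildcard overloads of disconnect() help. A minimal sketch, assuming mySignalAllDone() takes no arguments as in the question (this should also cover connections made from script code, since those are ordinary Qt connections):
// Disconnect every receiver attached to this one signal of myCppObject,
// whoever connected it (receiver and slot are left as wildcards).
QObject::disconnect(&myCppObject, SIGNAL(mySignalAllDone()), 0, 0);

// Or drop every connection originating from myCppObject at once.
QObject::disconnect(&myCppObject, 0, 0, 0);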
To get receivers, use QObject::receivers():
http://qt-project.org/doc/qt-4.8/qobject.html#receivers
It seems you cannot get slots (you must keep track of connections by yourself):
http://qt-project.org/forums/viewthread/6820
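A small sketch of what "keeping track yourself" can look like; the ConnectionRegistry class and its members are purely illustrative, not an existing Qt facility:
#include <QObject>
#include <QList>

// Records every connection it makes so that disconnectAll() can undo them later.
class ConnectionRegistry {
public:
    void connect(QObject *sender, const char *signal,
                 QObject *receiver, const char *slot) {
        QObject::connect(sender, signal, receiver, slot);
        m_connections.append(Entry(sender, signal, receiver, slot));
    }

    void disconnectAll() {
        foreach (const Entry &e, m_connections)
            QObject::disconnect(e.sender, e.signal, e.receiver, e.slot);
        m_connections.clear();
    }

private:
    struct Entry {
        Entry(QObject *s, const char *sig, QObject *r, const char *sl)
            : sender(s), signal(sig), receiver(r), slot(sl) {}
        QObject *sender;
        const char *signal;
        QObject *receiver;
        const char *slot;
    };
    QList<Entry> m_connections;
};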
However, there are ways to debug signals and slots:
http://samdutton.wordpress.com/2008/10/03/debugging-signals-and-slots-in-qt/
I am currently implementing a program in Qt that handles TCP communication between a PC program and external devices. The problem I have is more general, but I will use this one as an example.
My class hierarchy looks like this:
              Program
             /       \
     Server_A <--> Server_B <--- External system
        |
    Sockets[]
      /      \
 Commands   Confirmations
             /     |     \
        Interf1 Interf2 Interf3
I can get a command from a device (Socket); the command goes into the Confirmation class, performs whatever Interface job is needed, and returns a confirmation back to the Socket, which sends it back to the device.
The problem occurs when I want to have a command sent from the external system, which I also have to confirm.
1. I get a message on Server_B and pass it to Server_A with information about which socket to send the command to and which command to carry out.
2. I pass the command to a particular socket.
3. The socket sends the command to Commands, as that is where the logic for External System commands lives.
4. Commands prepares a message, runs its logic, and sends the message (through the socket) to the device.
5. The socket waits for the response.
6. The socket gets the response, understands that it was a response to an external system command, and passes it back to Commands.
7. Commands runs its logic.
Here it would all be fine, but the next step is:
Commands needs to confirm the success (or failure) to the external system.
So basically, what I have to do is pass a message from Commands to Server_B this way:
Commands -> Socket -> Server_A -> Server_B. For all these classes I would have to create an otherwise unnecessary method just to pass this one piece of information. Is there a way to solve this problem? During my programming it often happens that I have to pass something to a higher layer of my class structure, and it looks redundant to do that through additional methods that only pass the information further.
I have provided a sample pseudocode for this problem:
class Program
{
    ServerB serverB;
    ServerA serverA;
};

class ServerB
{
    void send(QString msg);
};

class ServerA
{
    QVector<MySocket*> sockets;
};

class MySocket
{
    Commands commands;
    Confirmations confirmations;
};

class Commands
{
    void doLogic();
    void sendToExternalSystem(QString message); // How to implement this?
};
My program is much bigger, but I hope this gives you a clue what I am trying to achieve. The simplest solution would be to add a method void sendToExternalSystem(QString message) to Sockets, Server_A and Server_B, as well as giving each class a pointer to its parent during construction (Commands would have access to its Socket, Sockets would have access to Server_A, and Server_A would have access to Server_B).
Finally, I came up with a solution. It was necessary to implement an ExternalCommand class, whose instances are created in Server_B.
In the minimal solution it has: 1. a field QString message, 2. a method QString getMessage(), 3. a method void finish(QString), 4. a signal void sendToExternal(QString).
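As a sketch, the corresponding declaration could look roughly like this (a minimal version; parent handling and destruction are left out):
class ExternalCommand : public QObject
{
    Q_OBJECT
public:
    explicit ExternalCommand(QString message, QObject* parent = nullptr);
    QString getMessage();
    void finish(QString outputMessage); // emits sendToExternal with the answer

signals:
    void sendToExternal(QString message);

private:
    QString message;
};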
When I read the message sent from the external system in Server_B, I create an instance of this class and connect it to the Server_B send logic. In my code it looks like this:
ExternalCommand::ExternalCommand(QString message, QObject* parent) : QObject(parent)
{
    this->message = message;
}

QString ExternalCommand::getMessage()
{
    return this->message;
}

void ExternalCommand::finish(QString outputMessage)
{
    emit sendToExternal(outputMessage);
}

void Server_B::onReadyRead()
{
    QTcpSocket *socket = dynamic_cast<QTcpSocket*>(sender());
    QString message = socket->readAll();
    ExternalCommand* cmd = new ExternalCommand(message);
    connect(cmd, &ExternalCommand::sendToExternal, socket,
            [socket](QString message) { socket->write(message.toUtf8()); });
}
It was also necessary to implement some kind of object destruction in ExternalCommand once the command is sent, but that isn't the point of this question.
So once this is implemented, instead of passing the message as a QString, it is passed to the lower levels as an ExternalCommand*, and once an answer is received it can be sent back to the external system by calling ExternalCommand::finish(QString outputMessage). Of course, this is just a minimal solution to the problem.
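A rough sketch of how the lower layer then uses it (the doLogic signature is adapted from the pseudocode above and is only illustrative):
void Commands::doLogic(ExternalCommand* cmd)
{
    // Use the original message to drive the command logic.
    QString request = cmd->getMessage();

    // ... send the request through the socket and wait for the
    // device's answer, exactly as before ...

    // When the answer is known, hand it back; the signal emitted by
    // finish() reaches Server_B without Commands knowing about Server_B.
    cmd->finish("OK");
}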
Thanks to @MatG for pointing me to the Promise/Future pattern, which was helpful in finding this solution.
I am currently working on an async REST client using boost::asio::io_service.
I am trying to make the client a kind of service for a bigger program.
The idea is that the client will execute async HTTP requests to a REST API, independently from the thread running the main program. So inside the client there will be another thread waiting for requests to send.
To pass the requests to the client I am using an io_service and an io_service::work initialized with the io_service. I almost reused the example given in this tutorial - logger_service.hpp.
My problem is that in the example, the handler posted to the service is a simple function. In my case I am making async calls like this
(I have done what is necessary to create all the instances of the following objects, and some more, in order to be able to establish the network connection):
boost::asio::io_service io_service_;
boost::asio::io_service::work work_(io_service_); // to prevent io_service::run() from returning when there is no more work to do
boost::asio::ssl::stream<boost::asio::ip::tcp::socket> socket_(io_service_);
In the main program I am doing the following calls:
client.Connect();
...
client.Send();
client.Send();
...
Some of the client's pseudocode:
void MyClass::Send()
{
    ...
    io_service_.post(boost::bind(&MyClass::AsyncSend, this));
    ...
}

void MyClass::AsyncSend()
{
    ...
    boost::asio::async_write(socket, streamOutBuffer, boost::bind(&MyClass::handle_send, this));
    ...
}

void MyClass::handle_send()
{
    boost::asio::async_read(socket, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}

void MyClass::handle_read()
{
    // ....treatment for the received data...
    if (allDataIsReceived)
        FireAnEvent(ReceivedData);
    else
        boost::asio::async_read(socket, streamInBuffer, boost::bind(&MyClass::handle_read, this));
}
As described in the documentation, the post() method requests the io_service to invoke the given handler and returns immediately. My question is: will the nested handlers, for example handle_send inside AsyncSend, be called right after (when the HTTP response is ready) when post() is used? Or will the handlers be called in an order different from the one defined by the order of the post() calls?
I am asking because when I call client->Send() only once, the client seems to work fine. But when I make two consecutive calls, as in the example above, the client cannot finish the first call before it goes on to execute the second one, and after some chaotic execution both operations fail in the end.
Is there any way to do what I am describing, i.e. execute the whole async chain before the execution of another one starts?
I hope I am clear enough with my description :)
Hello Blacktempel,
Thank you for the comment and the idea; however, I am working on a project which demands asynchronous calls.
In fact, as I am a newbie with Boost, my question and the example I gave weren't right in the part about the handle_read function. I have now added a few lines to the example to make it clearer what situation I am (was) in.
Many examples, maybe all of them, that treat the theme of how to create an async client are very basic. They just show how to chain the different handlers, and the treatment of the received data once handle_read is called is always something like "print some data on the screen" inside that same read handler. Which, I think, is far removed from real-world problems!
No one will just print data and end their program! Usually, once the data is received, another treatment has to start, for example FireAnEvent(). Influenced by the bad examples, I had put this FireAnEvent call inside the read handler, which is obviously wrong! It is bad because, done that way, handle_read might never exit, or exit too late. If this handler does not finish, the io_service loop will not finish either. And if your further treatment asks the async client to do something again, this will start/restart (I am not sure about the details) the io_service loop. In my case I was making several calls to the async client in this way. In the end I saw the io_service being started but never ending; even after the whole treatment had finished, I never saw the io_service stop.
So finally I let my async client fill some variable with the received data inside handle_read, and not call another function like FireAnEvent directly. I moved the call to FireAnEvent to just after io_service.run(), and it worked, because after run() returns I know that the loop has completely finished!
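A minimal sketch of that restructuring; receivedData_, Run() and FireAnEvent are placeholders from my pseudocode, not real Boost API, and it assumes no io_service::work object is still keeping run() from returning:
void MyClass::handle_read()
{
    // Only store the result here; do not start the further treatment
    // from inside the handler.
    if (allDataIsReceived)
        receivedData_ = ReceivedData;
    else
        boost::asio::async_read(socket, streamInBuffer,
                                boost::bind(&MyClass::handle_read, this));
}

void MyClass::Run()
{
    // run() returns once all queued handlers have finished, so the
    // read chain is guaranteed to be complete at this point.
    io_service_.run();

    // The further treatment happens outside the io_service loop.
    FireAnEvent(receivedData_);
}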
I hope my answer will help people :)
I actually want to get the newest files from an FTP server. For this, I am currently using QFtp to access the server and retrieve what I need.
This is what I do (roughly every 3 minutes):
Connection and authentication to the server.
A list() command to list all the files.
For each file listed by the list() command, a slot is called that verifies whether the file has already been downloaded (I rely on the date of the file). If the file is recent enough, I download it.
So, it works. But it is really slow, because there are thousands of files on the server and each time I verify the date of every one of them. Is it possible to abort the list() command, for example when I find a file that is too old? Or is there another, smarter way to speed up the process?
Yes, there is a way to abort the long-running command. When you call QFtp::list(), it starts executing the command on the FTP server, and each time the command finds an entry, QFtp emits the QFtp::listInfo(const QUrlInfo &) signal. You can handle that signal and check whether the time returned by QUrlInfo::lastModified() is too old. If it is, you can call QFtp::abort() to abort the execution of the list command on the server. Here is the sample code:
Establish a connection to handle the ftp signal:
connect(ftp, SIGNAL(listInfo(const QUrlInfo &)),
this, SLOT(onNewEntry(const QUrlInfo &)));
Implementation of the listInfo signal handling slot:
void MyFtp::onNewEntry(const QUrlInfo &url)
{
    // "oldestAccepted" is whatever cut-off the application defines,
    // e.g. the time of the newest file downloaded in the previous run.
    if (url.lastModified() < oldestAccepted)
        ftp->abort(); // aborts the running list() command
}
I wrote a C++ program using Qt. Some variables in my algorithm are changed outside of my program, in a web page. Every time the user changes the variable values in the web page, I modify a pre-created SQL database.
Now I want my code to pick up the new variable values during run time without stopping the program. There are two options:
1. Every n seconds, check the database and retrieve the variable values -> this is not good, since I would have to check whether the database content has changed every n seconds (it might go unchanged for years, and I don't want to poll for changes).
2. Every time the database is changed, a signal is emitted in my Qt program, so by catching this signal I can refresh the variable values. This seems the optimal solution, and I want to write the code for this part.
The C++ part of my code is:
void updateDatabase()
{
    QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
    db.setHostName("localhost");
    db.setDatabaseName("Mydataset");
    db.setUserName("user");
    db.setPassword("pass");
    if (!db.open())
    {
        qDebug() << "Error is: " << db.lastError();
        qFatal("Failed To Connect");
    }
    QSqlQuery qry;
    qry.exec("SELECT * from tblsystemoptions");
    QSqlRecord rec = qry.record();
    int cols = rec.count();
    qry.next();
    MCH = qry.value(0).toString(); // some global variables used in other functions
    MCh = qry.value(1).toString();
    // ... this goes on ...
}
QSqlDriver supports notifications which emit a signal when a specific event has occurred. To subscribe to an event just use QSqlDriver::subscribeToNotification( const QString & name ). When an event that you’re subscribing to is posted by the database the driver will emit the notification() signal and your application can take appropriate action.
db.driver()->subscribeToNotification("someEventId");
The message can be posted automatically from a trigger or a stored procedure. The message is very lightweight: nothing more than a string containing the name of the event that occurred.
You can connect the notification(const QString&) signal to your slot like this:
QObject::connect(db.driver(), SIGNAL(notification(const QString&)), this, SLOT(refreshView()));
I should note that this feature is not supported by MySQL as it does not have an event posting mechanism.
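For a backend that does support notifications (PostgreSQL via QPSQL, for example), a minimal sketch of the wiring could look like this; the event name "config_changed" and the slot refreshVariables() are illustrative, not fixed by Qt:
// Subscribe once, after the database has been opened.
db.driver()->subscribeToNotification("config_changed");
QObject::connect(db.driver(), SIGNAL(notification(const QString&)),
                 this, SLOT(refreshVariables()));

// Slot that re-reads the values whenever the database posts the event.
void MainWindow::refreshVariables()
{
    QSqlQuery qry("SELECT * from tblsystemoptions");
    if (qry.next()) {
        MCH = qry.value(0).toString();
        MCh = qry.value(1).toString();
    }
}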
There is no such thing. The Qt event loop and the database are not connected in any way. You only fetch/alter/delete/insert/... data and that's it. Option 1 is what you have to do. There are ways to use a TRIGGER on the server side to launch external scripts, but this would not help you very much.
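A minimal sketch of Option 1 with a QTimer; the 5 second interval and the checkDatabase() slot are arbitrary choices for illustration:
// Somewhere during initialisation: poll the database from the Qt event loop.
QTimer *timer = new QTimer(this);
connect(timer, SIGNAL(timeout()), this, SLOT(checkDatabase()));
timer->start(5000); // interval in milliseconds

// Slot that re-runs the SELECT and updates the globals,
// as in updateDatabase() above.
void MainWindow::checkDatabase()
{
    updateDatabase();
}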
I am facing a problem using Poco::HTTPServer. As described in the doc of TCPServer:
After calling stop(), no new connections will be accepted and all
queued connections will be discarded. Already served connections,
however, will continue being served.
Every connection is executed in its own thread.
Although it seems the destructor is successfully called, the connection thread still exists and keeps serving connections, which leads to segmentation faults.
I want to cancel all connections. Therefore I use Poco::ThreadPool::defaultPool().stopAll(); in the destructor of my server class, which leads to the behaviour also described in the docs of ThreadPool (It takes 10 seconds and objects are not deleted):
If a thread fails to stop within 10 seconds (due to a programming
error, for example), the underlying thread object will not be deleted
and this method will return anyway. This allows for a more or less
graceful shutdown in case of a misbehaving thread.
My question is: how do I accomplish the "more graceful" shutdown? Or is the programming error within the Poco library?
EDIT: I am using GNU/Linux (Ubuntu 10.04) with Eclipse + CDT as IDE; the target system is embedded Linux (kernel 2.6.9). On both systems I experienced the described behaviour.
The application I am working on is to be configured via a web interface, so the server sends an event to main (on upload of a new configuration) to restart.
Here's the outline:
int main()
{
    while (true)
    {
        server = new Server(...);
        server->start();
        // wait for termination request
        server->stop();
        delete server;
    }
}

class Server
{
    Poco::HTTPServer m_Server;

    Server(...) :
        m_Server(requestHandlerFactory, socket, params)
    {
    }

    ~Server()
    {
        [...]
        Poco::ThreadPool::defaultPool().stopAll(); // This takes 10 seconds!
        // Without the above line I get segmentation faults,
        // because connections are still being served.
    }

    void start() { m_Server.start(); }
    void stop()  { m_Server.stop(); }
};
This is actually a bug in the implementation of the stopAll() method. The listening socket is being shut down after closing the currently active connections, which allows the server to accept new connections in between, which in turn will not be closed and keep running. A workaround is to call HTTPServer::stop() and then HTTPServer::stopAll(). I reported the bug upstream including a proposed fix:
https://github.com/pocoproject/poco/issues/436
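Applied to the Server wrapper from the question, the workaround could look roughly like this (passing abortCurrent = true to stopAll() is my assumption about the desired behaviour):
Server::~Server()
{
    // Stop accepting new connections first, then close the connections
    // that are still being served.
    m_Server.stop();
    m_Server.stopAll(true); // abortCurrent = true
}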
You should avoid using Poco::ThreadPool::defaultPool().stopAll(), since it doesn't give you control over which threads are stopped.
I suggest you create a Poco::ThreadPool specifically for your Poco::HTTPServer instance and stop the threads of this pool when your server is stopped.
With this, your code would look like this:
class Server
{
    // Declared before m_Server so the pool is constructed first.
    Poco::ThreadPool m_threadPool;
    Poco::HTTPServer m_Server;

    Server(...) :
        m_Server(requestHandlerFactory, m_threadPool, socket, params)
    {
    }

    ~Server()
    {
    }

    void start() { m_Server.start(); }

    void stop()
    {
        m_Server.stop();
        m_threadPool.stopAll(); // Stop and wait for the serving threads
    }
};
This answer may be too late for the original poster, but since the question helped me solve my issue, I think it is good to post a solution here!