I'm using ADO's Command object to execute simple commands.
For example,
_CommandPtr CommPtr;
CommPtr.CreateInstance(__uuidof(Command));
CommPtr->ActiveConnection = MY_CONNECTION;
CommPtr->CommandType = adCmdText;
CommPtr->CommandText = L"insert into MY_TABLE values MY_VALUE";
for (int i = 0; i < 10000; i++) {
    CommPtr->Execute(NULL, NULL, adExecuteNoRecords);
}
This works fine, but I wanted to make the execution asynchronous to improve performance when dealing with large amounts of data, so I simply changed the execute option to adAsyncExecute:
(Documentation Link)
_CommandPtr CommPtr;
CommPtr.CreateInstance(__uuidof(Command));
CommPtr->ActiveConnection = MY_CONNECTION;
CommPtr->CommandType = adCmdText;
CommPtr->CommandText = L"insert into MY_TABLE values MY_VALUE";
for (int i = 0; i < 10000; i++) {
    CommPtr->Execute(NULL, NULL, adAsyncExecute);
}
For some reason this gives me an exception:
First-chance exception
Microsoft C++ exception: _com_error at memory location 0x0028FA24
Do any ADO experts know why this is happening?
Thanks
First, I won't ask why you need to loop 10,000 times just to execute the query, even though it consumes tremendous network, client, and server resources. I will just answer how to execute queries asynchronously.
You use this style of query execution to prevent your client from presenting a frozen GUI. While your app waits for the reply to the query, it cannot do anything else; to the user it looks hung. To keep the GUI responsive, for example to show an animated hourglass, you need to execute the query in asynchronous mode.
The example below is written in Visual Basic:
dbCon.Execute "Insert Into ... Values ...", , adAsyncExecute
Do While dbCon.State = (ADODB.ObjectStateEnum.adStateExecuting + ADODB.ObjectStateEnum.adStateOpen)
    Application.DoEvents
Loop
This way, the client keeps waiting for the server's reply but still lets the GUI process events, making your app more responsive.
Good morning everyone, newcomer writing his first question here (and new to C++/OOP).
I'm currently working on a project in which I have to send two types of JSON payloads to an MQTT broker at regular intervals (which can be set by sending a message to the Arduino MKR NB 1500).
I'm currently using these libraries in my VS Code workspace: ArduinoJson and StreamUtils to generate and serialize/deserialize the JSON, PubSubClient to publish/receive, and MKRNB.
What I noticed is that my program runs fine for a couple of publishes/receives and then gets stuck in the serialization function. I tried tracing with the serial monitor to see exactly where, but eventually reached a point where my C++ knowledge is too weak to even know where to put the traces in the code...
Let me show a small piece of the code:
DynamicJsonDocument coffeeDoc(12288); // I need a big JSON document (it's generated right before
                                      // the transmission and destroyed right after)
coffeeDoc["device"]["id"] = boardID;
JsonObject transaction = coffeeDoc.createNestedObject("transaction");
transaction["id"] = j; //j was defined as int, it's the number of the message sent
JsonArray transaction_data = transaction.createNestedArray("data");
for(int i = 0; i < total; i++){ //this loop generates the objects in the JSON
transaction_data[i] = coffeeDoc.createNestedObject();
transaction_data[i]["id"] = i;
transaction_data[i]["ts"] = coffeeInfo[i].ts;
transaction_data[i]["pwr"] = String(coffeeInfo[i].pwr,1);
transaction_data[i]["t1"] = String(coffeeInfo[i].t1,1);
transaction_data[i]["t2"] = String(coffeeInfo[i].t2,1);
transaction_data[i]["extruder"] = coffeeInfo[i].extruder;
transaction_data[i]["time"] = coffeeInfo[i].time;
}
client.beginPublish("device/coffee", measureJson(coffeeDoc), false);
BufferingPrint bufferedClient{client, 32};
serializeJson(coffeeDoc, bufferedClient); //THE PROGRAM STOPS IN THIS AND NEVER COMES OUT
bufferedClient.flush();
client.endPublish();
j++;
coffeeDoc.clear(); //destroy the JSON document so it can be reused
The same code works as it should if I use an Arduino MKR WiFi 1010; I think I know too little about how the GSM modem works.
What am I doing wrong? (Again, it works about twice before getting stuck.)
Thanks to everybody who finds the time to help, have a nice one!
Well, here's a little update: it turns out I ran out of memory, so 12288 bytes were too much for the poor microcontroller.
Through some trial and error I found that 10235 bytes works and is close to the maximum available (the program won't use more than 85% of the RAM); that's cutting it close, but the requirements of the project leave no other option.
Thanks for having read this question, have a nice one!
I am doing an experiment that concurrently inserts data into a MySQL table from multiple threads.
Here is partial code in C++.
bool query_thread(const char* cmd, MYSQL* con) {
    if (!query(cmd, con)) {
        return false;
    }
    return true;
}
int main() {
........
if(mysql_query(m_con, "CREATE TABLE tb1 (model INT(32), handle INT(32))") != 0) {
return 0;
}
thread thread1(query_thread, "INSERT INTO tb1 VALUES (1,1)", m_con);
thread thread2(query_thread, "INSERT INTO tb1 VALUES (2,2)", m_con);
thread thread3(query_thread, "INSERT INTO tb1 VALUES (3,3)", m_con);
thread1.join();
thread2.join();
thread3.join();
}
But MySQL issues this error message:
error cmd: INSERT INTO tb1 VALUES (1,1)
Lost connection to MySQL server during query
Segmentation fault
My questions are as follows:
Is it because MySQL cannot accept concurrent inserts, or is this a bad use of multi-threading?
Does multi-threaded insertion like the above help speed up the program? I understand the best approaches are multi-row inserts per query and LOAD DATA INFILE, but I just want to know whether this way can help at all.
Each thread must have:
own database connection
own transaction
own cursor
This, however, will not make your inserts much faster. In short, the InnoDB log (journal) is essentially serial, which limits the server's total insert rate. Read the MySQL performance blogs (Percona / MariaDB) for details. There are certainly parameters to tweak, and there have been advances recently.
I have two Dart apps running on Amazon (AWS Ubuntu), which are:
Self-hosted http API
Worker that handles background tasks on a timer
Both apps use PostgreSQL. They were occasionally crashing so, in addition to trying to find the root causes, I also implemented a supervisor script that just detects whether those 2 main apps are running and restarts them as needed.
Now the problem I need to solve is that the supervisor script is crashing, or the VM is crashing. It happens every few days.
I don't think it is a memory leak because if I increase the polling rate from 10s to much more often (1 ns), it correctly shows in the Dart Observatory that it exhausts 30MB and then garbage-collects and starts over at low memory usage, and keeps cycling.
I don't think it's an uncaught exception because the infinite loop is completely enclosed in try/catch.
I'm at a loss for what else to try. Is there a VM dump file that can be examined if the VM really crashed? Is there any other technique to debug the root cause? Is Dart just not stable enough to run apps for days at a time?
This is the main part of the code in the supervisor script:
///never ending function checks the state of the other processes
Future pulse() async {
while (true) {
sleep(new Duration(milliseconds: 100)); //DEBUG - was seconds:10
try {
//detect restart (as signaled from existence of restart.txt)
File f_restart = new File('restart.txt');
if (await f_restart.exists()) {
log("supervisor: restart detected");
await f_restart.delete();
await endBoth();
sleep(new Duration(seconds: 10));
}
//if restarting or either proc crashed, restart it
bool apiAlive = await isRunning('api_alive.txt', 3);
if (!apiAlive) await startApi();
bool workerAlive = await isRunning('worker_alive.txt', 8);
if (!workerAlive) await startWorker();
//if it's time to send mail, run that process
if (utcNow().isAfter(_nextMailUtc)) {
log("supervisor: starting sendmail");
Process.start('dart', [rootPath() + '/sendmail.dart'], workingDirectory: rootPath());
_nextMailUtc = utcNow().add(_mailInterval);
}
} catch (ex) {}
}
}
If you have the observatory up you can get a crash dump with:
curl localhost:<your observatory port>/_getCrashDump
I'm not totally sure if this is related, but Process.start returns a Future, which I don't believe will be caught by your try/catch if it completes with an error.
for(int i = 0; i < receivedACLCommands.count(); i++ )
{
QByteArray s = receivedACLCommands[i].toLatin1();
serialport->write(s);
serialport->waitForBytesWritten(1000);
}
In my method I have a QStringList that contains all my commands. The commands will be sent to a PID controller that needs to process each command before the next one is sent. I tried this with waitForBytesWritten, but that isn't working for me.
*The controller is an old SCORBOT Controller-A (it works with ACL commands).
Yes, waitForBytesWritten() isn't going to solve that. The only other thing you can do is sleep for a while after the wait call, giving the device some time to process the command you have just written. Exactly how long to sleep is of course a blind guess, and it is not necessarily a constant.
Do focus on enabling handshaking first; it is ignored too often. See the QSerialPort::setFlowControl() function. A decent device will use its RTS signal to turn off your CTS input (Clear To Send) when it isn't ready to receive anything. CTS/RTS handshaking is supported by Qt; you enable it with QSerialPort::HardwareControl.
Every connection requires one thread, and for now we allow only a certain number of connections per period. So every time a user connects, we increment the counter if we are within a certain period of the last time we set the check time.
1. Get current_time = time(0).
2. If current_time is OUTSIDE a certain period from check_time, set counter = 0 and check_time = current_time.
3. (Otherwise, leave them as they are.)
4. If counter < LIMIT, do counter++ and return TRUE.
5. Otherwise return FALSE.
But this is independent of how many threads are actually running in the server, so I'm thinking of a way to admit connections based on that number.
The problem is that we're using a third-party API for this, and we don't know exactly how long each connection will last. First I thought of creating a child thread, running ps in it, and passing the result to the parent thread, but that seems slow since I'd have to parse the output to get the total number of threads, and so on. I'm not sure I'm making sense. I'm using C++, by the way. Do you have any suggestions for how I could implement the new checking method? It would be much appreciated.
There will be a /proc/[pid]/task (since Linux 2.6.0-test6) directory for every thread belonging to process [pid]. Look at man proc for documentation. Assuming you know the pid of your thread pool you could just count those directories.
You could use boost::filesystem to do that from c++, as described here:
How do I count the number of files in a directory using boost::filesystem?
I assumed you are using Linux.
Okay, if you know the TID of the thread in use by the connection then you can wait on that object in a separate thread which can then decrement the counter.
At least I know that you can do it with MSVC...
bool createConnection()
{
    if( ConnectionMonitor::connectionsMaxed() )
    {
        LOG( "Connection Request failed due to over-subscription" );
        return false;
    }

    ConnectionThread& connectionThread = ThreadFactory::createNewConnectionThread();
    connectionThread.startConnection();

    ThreadMonitorThread& monitor = ThreadFactory::createThreadMonitor(connectionThread);
    monitor.monitor();
    return true;
}
and in ThreadMonitorThread
ThreadMonitorThread( const Thread& thread )
{
    this->thread = thread;
}

void monitor()
{
    // WaitForSingleObject takes a thread HANDLE (not a TID) plus a timeout
    WaitForSingleObject( thread.getHandle(), INFINITE );
    ConnectionMonitor::decrementThreadCounter();
}
Of course ThreadMonitorThread will require some special privileges to call the decrement and the ThreadFactory will probably need the same to increment it.
You also need to worry about properly coding this up... who owns the objects and what about exceptions and errors etc...