C++ console input blocks so I can't kill thread

My program has many different threads handling different things, and one of them deals with user input.
The other threads don't have much in the way of blocking calls, and those that do block are network based so will be interrupted or return gracefully when the socket is shut down.
However, the user thread calls std::cin to grab user input. The effect is that while all the other threads are dead, the user thread is still blocked on input, and will only die the next time input is given.
Is there any way for me to check if there is any user input to grab before blocking?
I understand cin.peek() exists, but in my experience it blocks if there is nothing to read (assuming I'm using it correctly).
My code is basically an infinite loop that stops when another thread switches the condition variable:
void doLoop()
{
    // `running` is shared between all threads; all the others die quickly
    // when it is false. It is set to true before the threads are started.
    while (running)
    {
        string input = "";
        getline(cin, input);
        // Handle input
    }
}
I'm on Windows, using VS2013, and cannot use external libraries. I'm using windows.h and std throughout.

I believe that the C++ Standard does not offer a way of checking standard input without blocking. Since you are willing to use platform-specific functions, kbhit() might suit your needs, but it has been deprecated on Windows; the alternative offered is _kbhit(). Of course, this is not portable to other platforms.
This is the link to MSDN: _kbhit

What you could do is use a future to give the user a time limit for input. You can then adapt this code to your main loop:
#include <iostream>   // std::cout
#include <future>     // std::async, std::future
#include <chrono>     // std::chrono::seconds
#include <string>     // std::string, std::getline
using namespace std;

bool myAsyncGetline(string& result)
{
    std::cout << "Enter something within the time limit" << endl;
    getline(cin, result);
    return true;
}

int main()
{
    // Call the function asynchronously. std::ref is required so the
    // reference parameter binds to res itself rather than to a copy,
    // and launch::async guarantees a real thread is started.
    string res;
    std::future<bool> fut = std::async(std::launch::async, myAsyncGetline, std::ref(res));
    std::chrono::seconds span(20);
    if (fut.wait_for(span) == std::future_status::timeout)
        std::cout << "Too Late!";
    else
        cout << "You entered " << res << endl;
    return 0;
}
This is available in VS2012 so you should be able to reproduce it.
The output is "Too Late!" if getline is still waiting after the timeout (20 s); otherwise it outputs the result.
I think this is simpler than messing around with killing the thread: the wait stops by itself when the time limit is hit. (Note that the background getline call itself keeps blocking until input eventually arrives; only the wait returns.)
Tell me if you need help integrating it into your existing code and I can assist.

Related

Losing messages when multiprocess logging with easylogging++ in C++

I'm using easylogging++ in my app to log control messages, and I've noticed that in the production environment (which runs under Linux) some messages were disappearing from the log files. I managed to reproduce this problem with a simple example in the test environment (on Windows): I made an infinite thread that just keeps logging a counter, and then I executed two instances of my program. Here is a reduced example of my code:
#include "Log/Log.h"
#include <chrono>
#include <thread>

INITIALIZE_EASYLOGGINGPP

void log_test() {
    long int count = 0;
    while (true) {
        log_info("Logando..." + std::to_string(count)); // this is defined in Log.h
        std::this_thread::sleep_for(std::chrono::milliseconds(10)); // matches the <chrono>/<thread> includes
        count++;
    }
}

int main() {
    std::thread t(log_test);
    t.detach();
    // rest of the code
}
and Log.h/Log.cpp are:
#pragma once
#include "easylogging++.h"
#include <mutex>
#include <string>

static std::mutex mtx;

void log_info(std::string s);
void log_error(std::string s);
Log.cpp:
#include "Log.h"

void log_info(std::string s)
{
    mtx.lock();
    LOG(INFO, ELPP_THREAD_SAFE) << s;
    mtx.unlock();
}

void log_error(std::string s)
{
    mtx.lock();
    LOG(ERROR, ELPP_THREAD_SAFE) << s;
    mtx.unlock();
}
and both executables are using the same .conf file with the following configuration:
* GLOBAL:
    FORMAT = "%datetime %msg"
    FILENAME = "C:/logs/%datetime{%Y-%M-%d}/msgs.log"
    ENABLED = true
    TO_FILE = true
    TO_STANDARD_OUTPUT = false
    SUBSECOND_PRECISION = 6
    PERFORMANCE_TRACKING = true
    MAX_LOG_FILE_SIZE = 2097152 ## 2MB - Comment starts with two hashes (##)
In the msgs.log file I noticed this sample:
2022-06-22 18:51:24,886631 Logando...288
2022-06-22 18:51:24,901856 Logando...289
2022-06-22 18:51:24,917820 Logando...5
2022-06-22 18:51:24,932827 Logando...291
2022-06-22 18:51:24,948248 Logando...292
Log 290 from the first process is missing, and there's just a blank line instead. I guess one solution could be using different log files for each process; however, the problem doesn't happen within a single process with multiple threads (instantiating threads t1, t2, t3 as in the code example above). I can't change to one log file per process in production at the moment since it would have a high impact, so how can I solve this so that I don't lose any messages at all? Thanks in advance!
I guess that one solution could be just using different log files for each process, however it doesn't happen in one single process with multiple threads (instantiating thread t1, t2,t3 as in the code example before)
Well, threads in a single process share an instance of std::mutex mtx, so they're properly synchronized. Perhaps more importantly, the thing being correctly synchronized is access to a single buffer, which is the only buffer writing to your file.
Two processes will have completely independent instances of std::mutex mtx, which doesn't matter if they're single-threaded, because only one thread is writing to each process's buffer. The problem is that the two buffers are not synchronized with each other when writing to the file, and as mentioned in comments, these writes are apparently not atomic appends.
Solutions are:
- Just use threads, since that works already.
- Use a shared (interprocess) mutex - this is generally platform-specific, but Boost.Interprocess is a good place to start.
- Use two files - have some other process tail them both and combine them into a single file if you need that.
- Use two FIFOs for the output files, and have some other process read them both and combine them into a single file. This avoids duplicating the file storage on disk, but is probably *NIX-specific.
- Use network sinks (see the easylogging++ documentation), and have a process listening on two ports on localhost, combining them into a single file. More portable than FIFOs, but also more coding.

std::atexit() doesn't work if std::cin has been used in the code

I started working with C++ a couple of days ago, so this is probably quite an amateur question:
The function registered with std::atexit() gets called when my application closes. Very happy. But as soon as I use std::cin to get input from the user (at least once), that function doesn't seem to get called.
This is a console application.
Edit: I would still like my void Close() function to execute while the program is waiting for input, but I can only achieve that if the program is not waiting for input. I reckon this might not be the correct way of doing this: my program starts an Apache web server and a MySQL database server (with ShellExecuteEx()), and when my application stops I stop them as well.
I have tried std::set_terminate() as well.
// This is super good, everything works as expected
#include <iostream>
#include <string>
#include <cstdlib>   // std::atexit

void Close()
{
    // Code I need to run when the program closes.
}

int main()
{
    std::atexit(Close);
    // Some code...
    return 0;
}
However, if I wait for a user input, 'void Close()' won't be called.
#include <iostream>
#include <string>
#include <cstdlib>   // std::atexit

void Close()
{
    // Code I need to run when the program closes.
}

int main()
{
    std::atexit(Close);
    std::string example;
    std::cin >> example;
    return 0;
}
However, if I wait for a user input, void Close() won't be called.
Indeed, but that's by design. While waiting for user input, your program is not yet about to shut down. As soon as the user gives input that is parsed into the example string, it is about to shut down, and that is when Close() is called. Hence: when executing, type some characters and hit Enter; this triggers Close().

Should C++ file read be slower than Ruby or C#?

Completely new to C++.
I'm comparing various aspects of C++, C# and Ruby to see if there's a need for mirroring a library. Currently: a simple read of a file (post update).
Compiling C++ and C# in VS 2017. C++ is in release (x64) mode (or at least compiled, then run).
The libraries more or less read a file and split the lines into three which make up the members of an object which are then stored in an array member.
For stress testing I tried a large file, 380 MB (7M lines); after the update I'm now getting similar performance with C++ and Ruby.
Purely reading the file and doing nothing else the performance is as below:
Ruby: 7s
C#: 2.5s
C++: 500+s (stopped running after a while; something's clearly wrong)
C++(release build x64): 7.5s
The code:
# Ruby
file = File.open "test_file.txt"
while !file.eof
  line = file.readline
end
// C#
StreamReader file = new StreamReader("test_file.txt"); // the constructor opens the stream
string line;
while ((line = file.ReadLine()) != null)
{
}
// C++
#include "stdafx.h"
#include <string>
#include <iostream>
#include <ctime>
#include <fstream>

int main()
{
    std::ios::sync_with_stdio(false);
    std::ifstream file;
    file.open("c:/sandboxCPP/test_file.txt");
    std::string line;
    std::clock_t start;
    double duration;

    start = std::clock();
    while (std::getline(file, line)) {
    }
    duration = (std::clock() - start) / (double)CLOCKS_PER_SEC;
    std::cout << "\nDuration: " << duration;

    while (true)
    {
    }
    return 0;
}
Edit: The following performed incredibly well, 0.03s:
vector<string> lines;
string tempString = str.str();
boost::split(lines, tempString, boost::is_any_of("\n"));
start = clock();
cout << "\nCount: " << lines.size();
int count = lines.size();
string s;
for (int i = 0; i < count; i++) {
    s = lines[i];
}
The s = lines[i] assignment is there on the likelihood that boost is doing something lazily that I don't know about; it changed the performance.
Tested with a cout of a random record at the end of the loop.
Thanks
Based on the comments and the originally posted code (it has since been fixed [now deleted]), there was previously a coding error (a missing i++) that stopped the C++ program from outputting anything. This, plus the while(true) loop in the complete code sample, would produce symptoms consistent with those in the question (i.e. the user waits 500 s, sees no output, and force-terminates the program): the program would finish reading the file without outputting anything and then enter the deliberately added infinite loop.
The revised complete source code correctly completes (according to the comments) in ~1.6 s for a 1.2-million-line file. My advice for improving performance would be as follows:
Make sure you are compiling in release mode (not debug mode). Given the user has specified they are using Visual Studio 2017, I would recommend viewing the official Microsoft documentation (https://msdn.microsoft.com/en-us/library/wx0123s5.aspx) for a thorough explanation.
To make it easier to diagnose problems do not add an infinite loop at the end of your program. Instead run the executable from powershell / (cmd) and confirm that it terminates correctly.
EDIT: I would also add:
For accurate timings you also need to take into account the OS disk cache. Run each benchmark multiple times to 'warm-up' the disk cache.
C++ doesn’t automatically write everything the instant you tell it to. Instead, it buffers the data so it can write it all at once, which is usually faster. To say “I really want to write this now.”, you need to say something like std::cout << std::flush (if you use std::endl to end your lines it does this automatically).
Usually you don’t need to do this; the buffers are flushed when the program exits, or when you ask for input from the user, or things like that. However, your program doesn’t exit, so it never flushes its buffer. You read the input, and then the program is executing while(true) forever, never giving the output.
The solution to this is simple: remove the while loop at the end of the program. You should not have that; people usually assume a console program exits when it’s finished. I would’ve guessed you had that because Visual Studio automatically closed the console window when the program was finished, but apparently it doesn’t do that with Ctrl+F5, which you use, so I’m not sure.

Interrupt running program and save data

How do I design a C/C++ program so that it can save some data after receiving an interrupt signal?
I have a long running program that I might need to kill (say, by pressing Ctrl-C) before it finished running. When killed (as opposed to running to conclusion) the program should be able to save some variables to disk. I have several big Linux books, but not very sure where to start. A cookbook recipe would be very helpful.
Thank you!
To do that, you need to make your program watch something, for example a global variable, that will tell it to stop what it is doing.
For example, supposing your long-running program executes a loop, you can do this:
g_shouldAbort = 0;
while (!finished)
{
    // (do some computing)
    if (g_shouldAbort)
    {
        // save variables and stuff
        break; // exit the loop
    }
}
with g_shouldAbort defined as a global volatile variable, like this:
static volatile sig_atomic_t g_shouldAbort = 0;
(It is very important to declare it "volatile", or else the compiler, seeing that nothing inside the loop writes it, may assume that if (g_shouldAbort) is always false and optimize it away. volatile sig_atomic_t is the type the standard guarantees can be safely written from a signal handler.)
Then, using for example the signal API that other users suggested, you can do this:
void signal_handler(int sig_code)
{
    if (sig_code == SIGUSR1) // user-defined signal 1
        g_shouldAbort = 1;
}
(You need to register this handler, of course, cf. here.)
signal(SIGUSR1, signal_handler);
Then, when you "send" the SIGUSR1 signal to your program (with the kill command, for example), g_shouldAbort will be set to 1 and your program will stop its computation.
Hope this helps!
NOTE : this technique is easy but crude. Using signals and global variables makes it difficult to use multiple threads of course, as other users have outlined.
What you want to do isn't trivial. You can start by installing a signal handler for SIGINT (C-c) using signal or sigaction but then the hard part starts.
The main problem is that in a signal handler you can only call async-signal-safe functions (or reentrant functions). Most library function can't be reliably considered reentrant. For instance, stdio functions, malloc, free and many others aren't reentrant.
So how do you handle this? Set a flag in your handler (set some global variable done to 1) and look out for EINTR errors. It should be safe to do the cleanup outside the handler.
What you are trying to do falls under the rubric of checkpoint/restart.
There's several big problems with using a signal-driven scheme for checkpoint/restart. One is that signal handlers have to be very compact and very primitive. You cannot write the checkpoint inside your signal handler. Another problem is that your program can be anywhere in its execution state when the signal is sent. That random location almost certainly is not a safe point from which a checkpoint can be dropped. Yet another problem is that you need to outfit your program with some application-side checkpoint/restart capability.
Rather than rolling your own checkpoint/restart capability, I suggest you look into using a free one that already exists. gdb on linux provides a checkpoint/restart capability. Another is DMTCP, see http://dmtcp.sourceforge.net/index.html .
Use signal(2) or sigaction(2) to assign a function pointer to the SIGINT signal, and do your cleanups there.
Make sure you enter your save function only once:
// somewhere in main
signal(SIGTERM, signalHandler);
signal(SIGINT, signalHandler);

void saveMyData()
{
    // save some data here
}

void signalHandler(int signalNumber)
{
    static pthread_once_t semaphore = PTHREAD_ONCE_INIT;
    std::cout << "signal " << signalNumber << " received." << std::endl;
    pthread_once(&semaphore, saveMyData);
}
If your process gets 2 or more signals before you finish writing your file, you'll save corrupted data.

Deleting And Reconstructing Singleton in C++

I have an application which runs on a controlling hardware connected with different sensors. On loading the application, it checks the individual sensors one by one to see whether there is proper communication with the sensor according to predefined protocol or not.
Now, I have implemented the code that checks individual sensor communication as a singleton thread. The following is its run function; it uses the select system call and a pipe for interprocess communication to signal the end of the thread.
void SensorClass::run()
{
    mFdWind = mPort->GetFileDescriptor();
    fd_set readfs;
    int max_fd = (mFdWind > gPipeFdWind[0] ? mFdWind : gPipeFdWind[0]) + 1;
    int res;
    mFrameCorrect = false;
    qDebug("BEFORE WHILE");
    while (true)
    {
        qDebug("\n IN WHILE LOOP");
        usleep(50);
        FD_ZERO(&readfs);
        FD_SET(mFdWind, &readfs);
        FD_SET(gPipeFdWind[0], &readfs);
        res = select(max_fd, &readfs, NULL, NULL, NULL);
        if (res < 0)
            perror("Select Failed");
        else if (res == 0)
            puts("TIMEOUT");
        else
        {
            if (FD_ISSET(mFdWind, &readfs))
            {
                puts("*************** RECEIVED DATA ****************");
                mFrameCorrect = false;
                FlushBuf();
                int n = mPort->ReadPort(mBuf, 100);
                if (n > 0)
                {
                    Count++;
                    QString str((const char*)mBuf);
                    //qDebug("\n %s", qPrintable(str));
                    // See if the header of the frame is valid
                    if (IsHeaderValid(str))
                    {
                        if ((!IsCommaCountOk(str)) || (!IsChecksumOk(str, mBuf)) || (!CalculateCommaIndexes(str)))
                        {
                            qDebug("\n not ok");
                            mFrameCorrect = false;
                        } // frame is incorrect
                        else
                        {
                            qDebug("\n OK");
                            mFrameCorrect = true;
                        } // frame is correct (checksum etc. are ok)
                    } // header is ok
                } // n > 0
            } // data received (FD_ISSET)
            if (FD_ISSET(gPipeFdWind[0], &readfs))
                break;
        } // res > 0
    } // infinite loop
}
The above thread is started from the main GUI thread. This runs fine. The problem is that I have given the user an option to retest the subsystem at will. For this I delete the singleton instance using
delete SensorClass::instance();
and then restart the singleton using
SensorClass::instace()->start();
The problem is that this time the control comes out of the while loop in run() immediately upon entering it. My guess is that the pipe read has again read from the write end that was written to last time. I have tried fflush() to clear out the I/O, but no luck.
My question is
Am I thinking on the right track?
If yes then how do we clear out the pipes?
If not can anyone suggest why is the selective retest not working?
Thanks in advance.
fflush clears the output buffer. If you want to clear the input buffer, you're going to need to read the data or seek to the end.
I'm not convinced the "Singleton" pattern is appropriate. There are other ways of ensuring at most one instance for each piece of hardware. What if you later want multiple threads, each working with a different sensor?
Let's assume that you're creating this thread by inheriting from QThread (which you don't specify). From the documentation of QThread::~QThread ():
Note that deleting a QThread object will not stop the execution of the thread it represents. Deleting a running QThread (i.e. isFinished() returns false) will probably result in a program crash.
So the statement delete SensorClass::instance(); is probably a really, really bad idea. In particular, it's going to be tough making any sense of this program's behavior given this flaw. Before continuing, you might want to find a way to remove the instance and ensure that the thread goes away, too.
Another problem comes to mind. When you run delete SensorClass::instance(), you get rid of some object (on the heap, one hopes). Who tells the singleton holder that its object is gone? E.g. so that the next call to SensorClass::instance() knows it needs to allocate another instance? Is this handled properly in SensorClass::~SensorClass?
Suppose that's not a problem. That likely means that the pointer to the instance is held in a global variable (or, e.g. a class level static member). It probably doesn't matter for this situation, but is access to that member properly synchronized? I.e. is there a mutex that's locked for each access to it?
You really don't want to run your initialization in a thread. That is issue number one: it dramatically complicates your problem, and it is the kind of thing that for some reason no one points out.
Just make the initialization its own function, then have a guard variable and a lock, and have everything that uses it initialize it when it starts up.
So you're signaling by writing something to the pipe, and the pipe is only created once - i.e. reused in the later threads?
Read the signaling away from the pipe. Assuming you signal by writing a single byte, then instead of just breaking out, you'd do something like (NB, no error checking etc below):
if (FD_ISSET(gPipeFdWind[0], &readfs)) {
    char c;
    read(gPipeFdWind[0], &c, 1);
    break;
}
There are also Qt classes for handling socket I/O, e.g. QTcpSocket, which would make the code not only cleaner, also more cross-platform. Or at least QSocketNotifier to abstract the select away.