For a project I am working on, I need the program to be able to receive input from the user while the loop keeps running, rather than blocking until they press enter.
For example:
while (true)
{
    if (userInput == true)
    {
        cin >> input;
    }
    //DO SOMETHING
}
This would mean that //DO SOMETHING would happen on every iteration of the loop, without the user having to press enter a million times.
Previously, my solution was to create my own input handling using kbhit() and getch() from conio.h, but that got very messy, and I don't like using conio.h for portability reasons. It also doesn't need to use cin specifically, because there is a good chance it just wouldn't work with it; any good solution that doesn't require me to roll my own input with a 'not very good' library would be much appreciated.
It could be worth looking into multi-threading for this. I'm usually hesitant to suggest it, because multithreading pulls in a host of potential problems that can end up difficult to debug, but in this case they can be isolated fairly easily. I envision something like this:
#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>
int main() {
    std::atomic<bool> interrupted;
    int x;
    int i = 0;
    do {
        interrupted.store(false);
        // create a new thread that does stuff in the background
        std::thread th([&]() {
            while (!interrupted) {
                // do stuff. Just as an example:
                std::cout << i << std::flush;
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        });
        std::cin >> x;
        // when input is complete, interrupt thread and wait for it to finish
        interrupted.store(true);
        th.join();
        // apply x, then loop to make new thread to do stuff
        i = x;
    } while (x != -1); // or some other exit condition
}
At first glance this looks somewhat wasteful because it keeps spawning threads and throwing them away, but user input takes, in computing terms, an eternity, so the overhead should not be prohibitive. More importantly, it has the advantage of avoiding data races wholesale: the only means of communication between the main (input) loop and the background thread is the atomic interruption flag, and the application of x to shared data happens while no background thread is running that could race with the main loop.
Disclaimer: the following seems to work with gcc on Linux; however, for some reason it does not work with VC++ on Windows. The specification appears to give a lot of leeway to implementations here, and VC++ definitely takes advantage of it...
There are multiple functions available on any std::basic_istream or its underlying std::basic_streambuf.
In order to know if there is any character available for input, you may call in_avail on std::basic_streambuf:
if (std::cin.rdbuf() and std::cin.rdbuf()->in_avail() >= 0) {
}
in_avail gives you the number of characters that can be read without blocking; it returns -1 if no such character is available. Afterward, you can use the regular "formatted" read operations such as std::cin >> input.
Otherwise, for unformatted reads, you can use readsome from std::basic_istream, which returns up to N characters available without blocking:
size_t const BufferSize = 512;
char buffer[BufferSize];
if (std::cin.readsome(buffer, BufferSize) >= 1) {
}
However, the behaviour of this method is highly implementation-dependent, so for a portable program it might not be that useful.
Note: as mentioned in the comments, the in_avail approach might be spotty. I can confirm it can work; however, you first have to use an obscure feature of C++ IO streams: std::ios_base::sync_with_stdio(false), which allows the C++ streams to buffer input (and thus steal it from C's stdio buffers).
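For illustration, here is a rough sketch of how those pieces could fit together, bearing in mind the portability caveats above (and that a formatted read can still block if only a partial token has been typed):
#include <iostream>

int main() {
    // Let the C++ streams do their own input buffering
    // instead of deferring to C's stdio buffers.
    std::ios_base::sync_with_stdio(false);

    int input = 0;
    while (true) {
        // Only attempt a formatted read when characters are already buffered.
        if (std::cin.rdbuf() && std::cin.rdbuf()->in_avail() > 0) {
            if (std::cin >> input) {
                std::cout << "got " << input << '\n';
            }
        }
        // DO SOMETHING on every iteration
    }
}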
It's sad that there is no simple portable way to check asynchronously whether a key was hit. But I guess the standard committee has carefully evaluated the pros and cons.
If you don't want to rely on third party event management libraries, and if multithreading would be overkill, one alternative could be to have your own version of kbhit(), with conditional compiling for the environments you want to support:
If your conio.h supports kbhit(), just use it.
For Windows, you can use _kbhit().
For Linux and POSIX, you can use Matthieu's answer, or look here for Morgan Mattews's code.
It's not the most academic answer, but it's pragmatic.
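As an illustration only, such a wrapper might be sketched like this; the POSIX branch uses termios plus select, which is one common way to emulate kbhit(), and the name my_kbhit is just a placeholder:
#ifdef _WIN32
#include <conio.h>
int my_kbhit() {
    return _kbhit();   // Windows CRT already provides this
}
#else
#include <sys/select.h>
#include <termios.h>
#include <unistd.h>
int my_kbhit() {
    // Put the terminal into non-canonical mode so characters arrive
    // without waiting for enter, then poll stdin with a zero timeout.
    termios oldt, newt;
    tcgetattr(STDIN_FILENO, &oldt);
    newt = oldt;
    newt.c_lflag &= ~(ICANON | ECHO);
    tcsetattr(STDIN_FILENO, TCSANOW, &newt);

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);
    timeval tv = {0, 0};
    int ready = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);

    tcsetattr(STDIN_FILENO, TCSANOW, &oldt);   // restore terminal settings
    return ready > 0;
}
#endif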
Maybe you can try something like this:
while (true)
{
    if (userInput == true)
    {
        if (cin >> input) {
        } else {
            std::cout << "Invalid input!" << std::endl;
            cin.clear();
        }
    }
    //DO SOMETHING
}
Related
The following is a snippet of a larger program and is done using Pthreads.
The UpdaterFunction reads from a text file. FunctionMap is just used to output (key,1). Essentially, UpdaterFunction and FunctionMap run on different threads.
queue<list<string>::iterator> mapperpool;

void *UpdaterFunction(void *fn) {
    std::string *x = static_cast<std::string *>(fn);
    string filename = *x;
    ifstream file(filename.c_str());
    string word;
    list<string> letterwords[50];
    char alphabet = '0';
    bool times = true;
    int charno = 0;
    while (file >> word) {
        if (times) {
            alphabet = *(word.begin());
            times = false;
        }
        if (alphabet != *(word.begin())) {
            alphabet = *(word.begin());
            mapperpool.push(letterwords[charno].begin());
            letterwords[charno].push_back("xyzzyspoon");
            charno++;
        }
        letterwords[charno].push_back(word);
    }
    file.close();
    cout << "UPDATER DONE!!" << endl;
    pthread_exit(NULL);
}

void *FunctionMap(void *i) {
    long num = (long)i;
    stringstream updaterword;
    string toQ;
    int charno = 0;
    fprintf(stderr, "Print me %ld\n", num);
    sleep(1);
    while (!mapperpool.empty()) {
        list<string>::iterator it = mapperpool.front();
        while (*it != "xyzzyspoon") {
            cout << "(" << *it << ",1)" << "\n";
            cout << *it << "\n";
            it++;
        }
        mapperpool.pop();
    }
    pthread_exit(NULL);
}
If I add the while(!mapperpool.empty()) loop in UpdaterFunction, it gives me the perfect output. But when I move it back to FunctionMap, it gives me weird output and segfaults later.
Output when used in UpdaterFunction:
Print me 0
course
cap
class
culture
class
cap
course
course
cap
culture
concurrency
.....
[Each word in separate line]
Output when used in FunctionMap (snippet shown above):
Print me 0
UPDATER DONE!!
(course%0+�0#+�0�+�05P+�0����cap%�+�0�+�0,�05�+�0����class5P?�0
����xyzzyspoon%�+�0�+�0(+�0%P,�0,�0�,�05+�0����class%p,�0�,�0-�05�,�0����cap%�,�0�,�0X-�05�,�0����course%-�0 -�0�-�050-�0����course%-�0p-�0�-�05�-�0����cap%�-�0�-�0H.�05�-�0����culture%.�0.�0�.�05 .�0
����concurrency%P.�0`.�0�.�05p.�0����course%�.�0�.�08/�05�.�0����cap%�.�0/�0�/�05/�0Segmentation fault (core dumped)
How do I fix this issue?
list<string> letterwords[50] is local to UpdaterFunction. When UpdaterFunction finishes, all its local variables get destroyed. When FunctionMap inspects an iterator, that iterator already points to freed memory.
When you insert the while(!mapperpool.empty()) loop into UpdaterFunction, it waits for FunctionMap to finish, and letterwords stays 'alive' for that long.
Here essentially UpdaterFunction and FunctionMap run on different threads.
And since they both manipulate the same object (mapperpool) and neither of them uses a pthread_mutex or std::mutex (C++11), you have a data race. If you have a data race, you have undefined behaviour, and the program might do whatever it wants. Most likely it will write garbage all over memory until eventually crashing, which is exactly what you see.
How do I fix this issue?
By locking the mapperpool object.
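A minimal self-contained sketch of the idea, with a shared queue guarded by a pthread_mutex_t; the names work_queue and queue_mutex are placeholders and not taken from the original program:
#include <pthread.h>
#include <queue>
#include <cstdio>

std::queue<int> work_queue;                               // shared between both threads
pthread_mutex_t queue_mutex = PTHREAD_MUTEX_INITIALIZER;  // guards every access to work_queue

void *producer(void *) {
    for (int i = 0; i < 10; ++i) {
        pthread_mutex_lock(&queue_mutex);
        work_queue.push(i);
        pthread_mutex_unlock(&queue_mutex);
    }
    return NULL;
}

void *consumer(void *) {
    int consumed = 0;
    while (consumed < 10) {
        pthread_mutex_lock(&queue_mutex);
        if (!work_queue.empty()) {
            std::printf("(%d,1)\n", work_queue.front());
            work_queue.pop();
            ++consumed;
        }
        pthread_mutex_unlock(&queue_mutex);
    }
    return NULL;
}

int main() {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
The same pattern applies to mapperpool: take the mutex around every push, empty check, front and pop, in both threads.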
Why is list not thread-safe?
Well, in the vast majority of use cases, a single list (or any other collection) won't be used by more than one thread. In a significant part of the remaining cases, the lock will have to extend over more than one operation on the collection, so the client has to do its own locking anyway. The remaining tiny percentage of cases, where locking inside the operations themselves would help, is not worth adding the overhead for everyone; a key C++ design principle is that you only pay for what you use.
The collections are only reentrant, meaning that using different instances in parallel is safe.
Note on pthreads
C++11 introduced a threading library that integrates well with the language. Most notably, it uses RAII for locking std::mutex, via std::lock_guard, std::unique_lock and std::shared_lock (for reader-writer locking). Consistently using these can eliminate a large class of locking bugs that otherwise take considerable time to debug.
If you can't use C++11 yet (on desktop you can, but some embedded platforms have not gotten a compiler update yet), you should first consider Boost.Thread, as it provides the same benefits.
If you can't even use that, still try to find, or write, a simple RAII wrapper for locking like the C++11/Boost ones. The basic wrapper is just a couple of lines, but it will save you a lot of debugging.
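For example, a bare-bones RAII wrapper over pthread_mutex_t could look roughly like this (a sketch, not a full replacement for std::lock_guard):
#include <pthread.h>

class MutexLock {
public:
    explicit MutexLock(pthread_mutex_t &m) : mutex_(m) {
        pthread_mutex_lock(&mutex_);      // acquire in the constructor
    }
    ~MutexLock() {
        pthread_mutex_unlock(&mutex_);    // release in the destructor, on every code path
    }
private:
    pthread_mutex_t &mutex_;
    MutexLock(const MutexLock &);         // non-copyable (pre-C++11 style)
    MutexLock &operator=(const MutexLock &);
};
Usage is then just { MutexLock guard(some_mutex); /* touch shared data */ } and the unlock can no longer be forgotten.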
Note that C++11 and Boost also have an atomic operations library, which pthreads sorely misses.
Suppose I have C++ code such as
#include "myheaderfiles.h"
//..some stuff
//...some more stuff
int main()
{
    double milliseconds;
    int seconds;
    int minutes;
    int timelimit = 2;
    ...
    ...
    //...code here that increments
    //.....milliseconds, seconds, and minutes
    while (minutes <= timelimit)
    {
        //...do stuff
        if (milliseconds > 500)
        {
            //...do stuff
            //...every half second
        } //end if
    } //end while
} //end main
The program runs fine and does what it's supposed to do, but it uses up 90%+ of my CPU.
It was suggested to me to use usleep() in my while loop every 100 ms or so, since I really only care about doing stuff every 500 ms anyway. That way, it won't hog the CPU when it's not needed.
So I added it to my while loop like so
while (minutes <= timelimit)
{
    //...do stuff
    if (milliseconds > 500)
    {
        //...do stuff
        //...every half second
    } //end if
    usleep(100000);
} //end while
It compiles fine, but when I run it, the program will hang right at usleep and never return. I read somewhere that before calling usleep, one needs to flush all buffers, so I flushed all file streams and couts etc etc. Still no luck.
I've searched for 2 days for a solution. I've used sleep() too, with no luck.
I found a few alternatives, but they seem complicated and would add a lot of code to my program that I don't really fully understand, which would complicate it and make it messy, plus it might not work.
I never really put too much thought into my while() loops before, because most of the programs I wrote were for microcontrollers or FPGAs, where hogging the processor is not a problem.
If anyone can help... any resources, links, books? Thanks.
Your approach somewhat comes from the wrong end. A program should consume 90-100% CPU as long as it has something useful to do (and it should block otherwise, consuming zero CPU).
Sleeping in between will make execution take longer for no good reason, and consume more energy than just doing the work as fast as possible (at 100% CPU) and then blocking completely until more work is available or until something else significant happens (e.g. half a second has passed, if that matters to you).
With that in mind, structure your program conceptually like this:
while (blocking_call() != exit_condition)
{
    while (have_work)
        do_work();
}
Also, do not sleep during execution, but use timers (e.g. setitimer) to do something at regular intervals. Not only will this be more efficient, but also a lot more precise and reliable.
How exactly you implement this depends on how portable you want your software to be. Under Ubuntu/Linux, you can for example use APIs such as epoll_wait with eventfd rather than writing a signal handler for the timer.
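To make this concrete, here is a Linux-only sketch using timerfd_create together with epoll_wait (a close relative of the eventfd approach mentioned above, used here purely as an illustration; the 500 ms period is just an example):
#include <sys/epoll.h>
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <iostream>

int main() {
    // A timerfd delivers timer expirations through a file descriptor.
    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    itimerspec spec = {};
    spec.it_value.tv_nsec = 500000000L;     // first expiry after 500 ms
    spec.it_interval.tv_nsec = 500000000L;  // then every 500 ms
    timerfd_settime(tfd, 0, &spec, NULL);

    int ep = epoll_create1(0);
    epoll_event ev = {};
    ev.events = EPOLLIN;
    ev.data.fd = tfd;
    epoll_ctl(ep, EPOLL_CTL_ADD, tfd, &ev);

    int ticks = 0;
    while (ticks < 10) {
        epoll_event out;
        int n = epoll_wait(ep, &out, 1, -1);   // blocks, consuming no CPU while idle
        if (n == 1 && out.data.fd == tfd) {
            std::uint64_t expirations = 0;
            read(tfd, &expirations, sizeof expirations);
            ticks += static_cast<int>(expirations);
            std::cout << "half a second passed (tick " << ticks << ")\n";
        }
    }
    close(ep);
    close(tfd);
    return 0;
}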
This code works as expected for me (running on OSX though).
#include <unistd.h>
#include <iostream>
int main() {
    std::cout << "hello" << std::endl;
    int i = 0;
    while (i < 10) {
        ++i;
        usleep(100000);
        std::cout << "i = " << i << std::endl;
    }
    std::cout << "bye" << std::endl;
    return 0;
}
There is a logical issue, or maybe you're keeping multiple counters? Since you said you've worked with microcontrollers, I assume you're trying to use clock cycles as a method of counting while calling the system timers? Also, what has me wondering is: if you were recommended to use usleep(x), why are you using a double for milliseconds? usleep(1) sleeps for 1 microsecond, and 1 millisecond == 1000 microseconds. sleep(x) counts in seconds, so the system will suspend its current task for x seconds.
#include <iostream>
#include <unistd.h>
using namespace std;
#define MILLISECOND 1000
#define SECOND 1000*MILLISECOND
int main(int argc, char *argv[]) {
    int time = 20;
    int sec_counter = 0;
    do {
        cout << sec_counter << " second" << endl;
        usleep(SECOND);
        sec_counter++;
    } while (sec_counter < time + 1);
    return 0;
}
If you wanted to use 500ms then replace usleep(SECOND) with usleep(500*MILLISECOND).
I suggest you use a debugger and step through your code to see what's happening.
In the following code example, program execution never ends.
It creates a thread which waits for a global bool to be set to true before terminating. There is only one writer and one reader. I believe that the only situation that allows the loop to continue running is if the bool variable is false.
How is it possible that the bool variable ends up in an inconsistent state with just one writer?
#include <iostream>
#include <pthread.h>
#include <unistd.h>
bool done = false;
void *threadfunc1(void *) {
    std::cout << "t1:start" << std::endl;
    while (!done);
    std::cout << "t1:done" << std::endl;
    return NULL;
}

int main()
{
    pthread_t threads;
    pthread_create(&threads, NULL, threadfunc1, NULL);
    sleep(1);
    done = true;
    std::cout << "done set to true" << std::endl;
    pthread_exit(NULL);
    return 0;
}
There's a problem in the sense that this statement in threadfunc1():
while(!done);
can be implemented by the compiler as something like:
a_register = done;
label:
if (a_register == 0) goto label;
So updates to done will never be seen.
There is really nothing that prevents the compiler from optimizing the while loop away. Use std::atomic or a mutex to access the bool from more than one thread; that is the only supported and correct solution. As you are using POSIX, a mutex would be the right solution in this case.
And don't use volatile. The POSIX standard states what has to work, and volatile is not a solution that is guaranteed to work.
And there is another problem: there is no guarantee that your newly created thread ever started to run before you set the flag to true.
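For what it's worth, a minimal sketch of the same program with the flag made atomic (keeping the pthread API, and joining the thread so main actually waits for it):
#include <atomic>
#include <pthread.h>
#include <unistd.h>
#include <iostream>

std::atomic<bool> done(false);   // atomic: the write becomes visible to the spinning thread

void *threadfunc1(void *) {
    std::cout << "t1:start" << std::endl;
    while (!done.load()) { }     // the load cannot be optimized out of the loop
    std::cout << "t1:done" << std::endl;
    return NULL;
}

int main() {
    pthread_t t;
    pthread_create(&t, NULL, threadfunc1, NULL);
    sleep(1);
    done.store(true);
    std::cout << "done set to true" << std::endl;
    pthread_join(t, NULL);       // wait for the thread instead of calling pthread_exit
    return 0;
}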
For such a simple example volatile is enough, but for the vast majority of real-world situations it is not. Use a condition variable for this task. They look weird at first glance, but they are actually quite logical. On x86, a bool IS atomic to read/write (on ARM, probably not). Also, there is an obstacle with std::vector<bool>: it is NOT a vector of bools, it is a bitfield. To write a vector of bools from several threads, use std::vector<char> (or bool arr[SIZE]) instead.
Also, you don't join the thread, which is wrong.
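Sketched in C++11 terms (pthread_cond_t works the same way), waiting on a condition variable instead of spinning might look like this:
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool done = false;   // protected by m

void worker() {
    std::cout << "t1:start" << std::endl;
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return done; });   // sleeps instead of burning CPU
    std::cout << "t1:done" << std::endl;
}

int main() {
    std::thread t(worker);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_one();
    t.join();   // and join the thread, as noted above
    return 0;
}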
A race condition occurs when two threads access the same object and at least one of the accesses is a write.
That means there are two types of conflict: write-write and write-read.
Back to your code: you essentially have two threads, one being the main thread and the other the one you created with pthread_create.
One of them does a read, while(!done), and the other does a write, done = true.
You have a race condition for sure.
Is a race condition possible when only one thread writes to a bool variable in c++?
Yes. In your case, the main thread is also a thread (i.e. you have one thread writing and one thread reading).
How is it possible that the bool variable ends up in an inconsistent state with just one writer?
The compiler is (should be) an optimizing compiler. It will probably optimize away the repeated reads of the done variable, unless you take care to avoid that (use std::atomic<bool> done instead).
It's not guaranteed that the assignment to a bool, which is one byte, is atomic.
I attended an interview two days back. The interviewer was good at C++, but not at multithreading. He asked me to write code for two threads, where one thread prints 1,3,5,... and the other prints 2,4,6,..., but the output should be 1,2,3,4,5,.... So I gave the code below (pseudocode):
mutex_Lock LOCK;
int last = 2;
int last_Value = 0;

void function_Thread_1()
{
    while (1)
    {
        mutex_Lock(&LOCK);
        if (last == 2)
        {
            cout << ++last_Value << endl;
            last = 1;
        }
        mutex_Unlock(&LOCK);
    }
}

void function_Thread_2()
{
    while (1)
    {
        mutex_Lock(&LOCK);
        if (last == 1)
        {
            cout << ++last_Value << endl;
            last = 2;
        }
        mutex_Unlock(&LOCK);
    }
}
After this, he said "these threads will work correctly even without those locks. Those locks will reduce the efficiency". My point was that without the lock there will be situations where one thread checks (last == 1 or 2) at the same time the other thread tries to change the value to 2 or 1. So my conclusion is that it will work without the lock, but that is not a correct/standard way. Now I want to know who is correct, and on what basis?
Without the lock, running the two functions concurrently would be undefined behaviour because there's a data race on the access of last and last_Value. Moreover (though not causing UB), the printing would be unpredictable.
With the lock, the program becomes essentially single-threaded, and is probably slower than the naive single-threaded code. But that's just in the nature of the problem (i.e. to produce a serialized sequence of events).
I think the interviewer might have thought about using atomic variables.
Each instantiation and full specialization of the std::atomic template defines an atomic type. Objects of atomic types are the only C++ objects that are free from data races; that is, if one thread writes to an atomic object while another thread reads from it, the behavior is well-defined.
In addition, accesses to atomic objects may establish inter-thread synchronization and order non-atomic memory accesses as specified by std::memory_order.
[Source]
By this I mean the only thing you should change is to remove the locks and change the last variable to std::atomic<int> last = 2; instead of int last = 2;
This should make it safe to access the last variable concurrently.
Out of curiosity I have edited your code a bit, and ran it on my Windows machine:
#include <iostream>
#include <atomic>
#include <thread>
#include <Windows.h>
std::atomic<int> last = 2;
std::atomic<int> last_Value = 0;
std::atomic<bool> running = true;

void function_Thread_1()
{
    while (running)
    {
        if (last == 2)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 1;
        }
    }
}

void function_Thread_2()
{
    while (running)
    {
        if (last == 1)
        {
            last_Value = last_Value + 1;
            std::cout << last_Value << std::endl;
            last = 2;
        }
    }
}

int main()
{
    std::thread a(function_Thread_1);
    std::thread b(function_Thread_2);
    while (last_Value != 6) {} // we want to print 1 to 6
    running = false;           // inform threads we are about to stop
    a.join();
    b.join();                  // join
    while (!GetAsyncKeyState('Q')) {} // wait for 'Q' press
    return 0;
}
and the output is always:
1
2
3
4
5
6
Ideone refuses to run this code (compilation errors)..
Edit: But here is a working linux version :) (thanks to soon)
The interviewer doesn't know what he is talking about. Without the locks you get races on both last and last_Value. The compiler could, for example, reorder the assignment to last to before the print and increment of last_Value, which could lead to the other thread executing on stale data. Furthermore, you could get interleaved output, meaning things like two numbers not being separated by a line break.
Another thing that could go wrong is that the compiler might decide not to reload last and (less importantly) last_Value on each iteration, since they can't (safely) change between those iterations anyway (data races are illegal by the C++11 standard and aren't acknowledged in previous standards). This means that the code suggested by the interviewer actually has a good chance of creating infinite loops that do absolutely nothing.
While it is possible to make that code correct without mutexes, it absolutely needs atomic operations with appropriate ordering constraints (release semantics on the assignment to last, and acquire on the load of last inside the if statement).
Of course your solution does lower efficiency by effectively serializing the whole execution. However, since the runtime is almost completely spent inside the stream output operation, which is almost certainly internally synchronized with locks, your solution doesn't lower the efficiency any more than it already is. Blocking on the lock in your code might actually be faster than busy-waiting for it, depending on the available resources (the lock-free version using atomics would absolutely tank when executed on a single-core machine).
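Purely as an illustration of that last point, a lock-free variant with explicit acquire/release ordering might look roughly like this; it prints 1 to 6 and then stops, instead of looping forever, and uses printf so each number comes out on its own line:
#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<int> last(2);
int last_Value = 0;   // plain int, protected by the release/acquire ordering on 'last'

void odd_printer() {
    for (int i = 0; i < 3; ++i) {
        while (last.load(std::memory_order_acquire) != 2) { }  // wait for our turn
        std::printf("%d\n", ++last_Value);
        last.store(1, std::memory_order_release);              // publish last_Value to the other thread
    }
}

void even_printer() {
    for (int i = 0; i < 3; ++i) {
        while (last.load(std::memory_order_acquire) != 1) { }
        std::printf("%d\n", ++last_Value);
        last.store(2, std::memory_order_release);
    }
}

int main() {
    std::thread a(odd_printer);
    std::thread b(even_printer);
    a.join();
    b.join();   // prints 1,2,3,4,5,6 in order
    return 0;
}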
I am new to Boost threading and I am stuck with how output is performed from multiple threads.
I have a simple boost::thread counting down from 9 to 1; the main thread waits and then prints "LiftOff..!!"
#include <iostream>
#include <boost/thread.hpp>
using namespace std;
struct callable {
    void operator()();
};

void callable::operator()() {
    int i = 10;
    while (--i > 0) {
        cout << "#" << i << ", ";
        boost::this_thread::yield();
    }
    cout.flush();
}

int main() {
    callable x;
    boost::thread myThread(x);
    myThread.join();
    cout << "LiftOff..!!" << endl;
    return 0;
}
The problem is that I have to use an explicit "cout.flush()" statement in my thread to display the output. If I don't use flush(), I only get "LiftOff!!" as the output.
Could someone please advise why I need to use flush() explicitly?
This isn't specifically thread related: cout will usually buffer (on a per-thread basis in some implementations) and only write the output when the implementation decides to, so within the thread the output will only appear on an implementation-specific basis. By calling flush you are forcing the buffers to be flushed.
This will vary across implementations; usually, though, it's after a certain number of characters or when a newline is sent.
I've found that multiple threads writing to the same stream or file is mostly OK, provided that the output is performed as atomically as possible. It's not something that I'd recommend in a production environment, though, as it is too unpredictable.
This behaviour seems to depend on the OS-specific implementation of the cout stream. I guess that in your case write operations on cout are buffered in some thread-specific memory intermediately, and the flush() operation forces them to be printed to the console. I guess this because endl includes a call to flush(), yet the endl in your main function doesn't see your changes even after the thread has been joined.
BTW, it would be a good idea to synchronize output to an ostream shared between threads anyway, otherwise you might see it intermingled. We do so for our logging classes, which use a background thread to write the logging messages to the associated ostream.
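For instance, a sketch of the countdown example with the shared stream guarded by a Boost mutex (and flushed explicitly) might look like this:
#include <iostream>
#include <boost/thread.hpp>

boost::mutex cout_mutex;   // serializes access to std::cout across threads

void count_down() {
    for (int i = 9; i > 0; --i) {
        boost::mutex::scoped_lock lock(cout_mutex);
        std::cout << "#" << i << ", " << std::flush;   // flush so the output appears promptly
    }
}

int main() {
    boost::thread myThread(count_down);
    myThread.join();
    std::cout << "LiftOff..!!" << std::endl;   // main prints after join, so no lock needed here
    return 0;
}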
Given the short length of your messages, there's no reason anything should appear without a flush. (Don't forget that std::endl is the equivalent of << '\n' << std::flush.)
I get the expected behaviour with and without flush (gcc 4.3.2, Boost 1.47, Linux RH5).
I assume that your Cygwin system chooses to implement several std::cout objects with associated std::streambufs. This, I assume, is implementation specific.
Since flush or endl only forces its own buffer to be flushed onto its OS-controlled output sequence, the cout object of your thread remains buffered.
Sharing a reference of an ostream between the threads should solve the problem.