I'm practicing C++ through Visual C++ Express and tutorials on YouTube. I made a calculator by following a cool video, but I didn't want the program to end right after you do one equation, so I searched for what to do and tried while(true).
The problem is, instead of a number, when the program prompted me to type in a number, I typed in "blah", hit enter, and the program started to just go crazy so I exited real quick.
Without the while(true), if I type in "blah", then the program instantly goes to the "press any button to exit..." phrase. So what I'm wondering is, how dangerous is while(true)?
Also, I'm really sorry if this question is inappropriate-_-
The worst thing that will happen with while(true){} is that you will go into an infinite loop.
If you write something into those brackets other than an instruction that breaks the loop, it will be repeated over and over again.
This answers your question. If you want to know what's wrong with your calculator, create a new question with a specific problem.
while(true) can obviously create an "infinite" loop, which just means there's nothing internal to the process capable of causing the loop to terminate. It's still possible that the Operating System may decide (e.g. after a CPU usage quota is exhausted) or be asked (e.g. kill -HUP processid on UNIX/Linux) to terminate the process.
It's worth considering the difference between infinite loops containing:
blocking (I/O?) requests, such as waits for new network client connections or data or keyboard input, or even just for a specific interval to elapse, then a finite amount of periodic or I/O related processing, and
non-blocking branching and data-processing instructions that just spin around burning CPU and/or thrashing CPU/memory caches (e.g. x += array[random(array.size())];)
The former case may actually be deliberately used in some cases, such as a server process that doesn't need to do any orderly shutdown and can therefore let the OS shut it down without running any post-loop code, but the second case is usually a programming error: you tend to only have such a while(true) when there's some way for an exit condition or an error occurring during processing controlled by the loop to interrupt the loop. For example, there may be an if (condition) break;, or something that will throw an exception when there's an error. This may not be obvious - it could for example be an exception from some function that's invoked - even from having set the Standard iostream's functions to throw on conversion failure or EOF - rather than a visible throw statement in the source code within the loop.
It's worth noting that many other things can create infinite loops too - for example, while (a < b) could go forever if there are no conditions under which a could become >= b. So it's a general aspect of programming that you have to consider how your loops exit, and not some problem with the safety of the language. For inputs, it's normal to do something like:
while (std::cin >> my_int)
{
// can use my_int...
}
// failed to convert next input from stdin to an int and/or hit EOF
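If you do want to keep looping after bad input like "blah" instead of exiting, the usual fix is to clear the stream's error state and discard the offending line before trying again. A minimal sketch (not your calculator, just the input-handling part):

#include <iostream>
#include <limits>

int main()
{
    int my_int;
    while (true)
    {
        std::cout << "Enter a number (end input to quit): ";
        if (std::cin >> my_int)
        {
            std::cout << "You typed " << my_int << "\n";
        }
        else if (std::cin.eof())
        {
            break;  // no more input at all
        }
        else
        {
            std::cin.clear();  // reset the fail state caused by "blah"
            std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // drop the bad line
            std::cout << "That wasn't a number, try again.\n";
        }
    }
}

Without the clear()/ignore() step the stream stays in a failed state, every subsequent >> fails immediately, and the while(true) spins forever - which is the "going crazy" you saw.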
If you write a console application, you intend it to be run from within a console.
Don't add a while loop, cin.get(), or anything else just to keep the window open; run your program from a console and use it as it's intended to be used.
I've got a C++ program that takes as command line arguments paths to some files and then does some number crunching. There is no further user input, but the program might output some stuff on stdout.
I’m wondering now how to implement signal handling for this program. The program will end, no matter what signal is received, but the main signals will be SIGINT if the user interrupts the program or SIGALRM if some timeout is reached. However, I need to print some sort of summary to stdout before exiting the program.
As I'm new to C++ (coming from C#/Java), I'm wondering how to do this most efficiently. I've read the documentation and, if I understood it correctly, I'm not allowed to print anything to stdout within the signal handler. The standard way I've come across during research is to set a flag and check that flag constantly. However, I don't quite like this approach, as it would introduce many additional conditional jumps.
I know, premature optimization is the root of all evil. And sure enough, I might end up implementing the flag approach anyway but since I’m new to c++, I’m wondering if there is a different approach that I’m unaware of. I thought about spawning a new thread that does the calculations and using sigwait in the main thread to wait for signals and if one is received, print the summary, and exit the process.
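Roughly what I had in mind (just a sketch, untested; do_number_crunching() and print_summary() stand in for my real code):

#include <pthread.h>
#include <signal.h>
#include <cstdio>
#include <thread>

void do_number_crunching() { /* the actual work would go here */ }
void print_summary()       { std::printf("summary goes here\n"); }

int main()
{
    // Block SIGINT and SIGALRM in the main thread; the worker thread
    // created below inherits this mask, so only sigwait() sees them.
    sigset_t set;
    sigemptyset(&set);
    sigaddset(&set, SIGINT);
    sigaddset(&set, SIGALRM);
    pthread_sigmask(SIG_BLOCK, &set, nullptr);

    std::thread worker(do_number_crunching);
    worker.detach();  // the whole process exits from main anyway

    int sig = 0;
    sigwait(&set, &sig);  // sleeps here until SIGINT or SIGALRM arrives

    print_summary();      // normal context, not a signal handler, so printing is fine
    return 0;
}

(If the crunching can finish before any signal arrives, the worker would also have to wake main somehow, e.g. by raising SIGALRM itself - I left that out.)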
Is there some better way?
Thanks!
Alex
When making a program with Qt, we can have a long recursive process.
If so, after a while Windows shows the "Not responding" message next to the window title.
This message could lead the user to think the program doesn't work, which is not true.
What can I do to avoid this message in Qt?
In order to remain responsive to the system and user input, put a long running task into its own thread. You might also want to provide feedback to the user, like a progress bar, so he sees the program is still doing some job he requested.
See also Threading Basics for an introduction on using threads with Qt and Threading and Concurrent Programming Examples for some examples.
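For illustration, one minimal way to do that (a sketch, assuming Qt 5.10+ for QThread::create; MainWindow, startLongTask() and doHeavyWork() are made-up names, and the heavy function must not touch GUI objects directly):

#include <QThread>

void MainWindow::startLongTask()
{
    QThread *worker = QThread::create([this] {
        doHeavyWork();  // the long recursive work runs on the new thread
    });
    connect(worker, &QThread::finished, this, [this] {
        // back on the GUI thread: hide the progress bar, re-enable buttons, ...
    });
    connect(worker, &QThread::finished, worker, &QObject::deleteLater);
    worker->start();  // the GUI thread stays free, so no "Not responding"
}

You would show the progress bar (or a busy cursor) right before worker->start() and clean it up in the finished handler.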
If your process takes a long time because of loops (or recursive functions), you can call QCoreApplication::processEvents() in your loop to ask your application to process pending events.
If you have only one instruction that takes a long time (such as copying a large file), you may use QThread or QtConcurrent.
While Olaf's answer is good, a simpler approach would be to sprinkle QCoreApplication::processEvents() in your code.
From the docs:
Processes all pending events for the calling thread according to the specified flags until there are no more events to process. You can call this function occasionally when your program is busy performing a long operation (e.g. copying a file).
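As a rough illustration of that (doOneStep() and stepCount are made-up placeholders):

#include <QCoreApplication>

for (int i = 0; i < stepCount; ++i)
{
    doOneStep(i);                           // one slice of the long computation
    if (i % 100 == 0)                       // no need to do it on every iteration
        QCoreApplication::processEvents();  // let the GUI handle paint/input events
}

Keep in mind that processEvents() lets user input be handled while your loop is still running, so any code reachable from those events (button clicks etc.) must tolerate being called mid-computation.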
I'm not sure if anyone still uses Borland C++ 3.1, but I have to.
I have a program which implements simple threads and changes context of those threads through timer interrupt.
I have an infinite loop and 2 threads that do their job and switch between each other and main's thread. Their job is to produce some output, to write something to the console.
The problem is that every time I run the program, a different thing happens.
Sometimes it works for half a minute and then it just stops writing what it should. The writing just stops, there is no error, and Borland doesn't crash.
Sometimes it stops and Borland crashes without a message.
Sometimes it stops and Borland crashes with the message "illegal instruction".
Sometimes the last line it writes before it stops contains some weird characters that shouldn't be in the output.
Is it that the console is "full" and Borland acts weird?
What can be a problem?
If I remember correctly, it was not safe to write to the console (or use file I/O) under DOS when called from an interrupt. To do it properly, you must check something called the "DOS re-entrancy flag" (the InDOS flag) and only write to the console if it is zero (see http://cs.smith.edu/~thiebaut/ArtOfAssembly/CH18/CH18-3.html or search the web for more information).
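In Borland C++ you can get at that flag with DOS function 34h, which returns a far pointer to the InDOS flag. A rough sketch (from memory, so double-check against your DOS version):

#include <dos.h>

static unsigned char far *indos_flag;

void init_indos(void)
{
    union REGS   r;
    struct SREGS s;
    segread(&s);                 /* initialise the segment registers    */
    r.h.ah = 0x34;               /* DOS: get address of the InDOS flag  */
    intdosx(&r, &r, &s);
    indos_flag = (unsigned char far *) MK_FP(s.es, r.x.bx);
}

/* In the timer ISR / thread switcher: only call DOS output while *indos_flag == 0,
   otherwise postpone the write until a later tick. */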
In real and virtual 8086 modes programs aren't protected from each other. So, if your program screws something up, for example:
overwrites memory that does not belong to it (or to the appropriate thread in itself), including memory corruptions due to stack overflows in the program or its ISRs
fails to preserve (=save, then restore) CPU registers in any of its ISRs
changes hardware states to something unexpected to the rest of the system
alters the timer frequency in ways visible to the rest of the system
if it does any of that, it should be no surprise that something crashes or hangs or misbehaves in some other way.
I'm guessing that you're having issues 1 and/or 2 above. You can have a race condition there as well.
Unfortunately, without seeing any of your code we can't be of any more help. Think of it: it's like trying to treat a new patient over the phone.
In my code there are three concurrent routines. I try to give a brief overview of my code,
Routine 1 {
do something
*Send int to Routine 2
Send int to Routine 3
Print Something
Print Something*
do something
}
Routine 2 {
do something
*Send int to Routine 1
Send int to Routine 3
Print Something
Print Something*
do something
}
Routine 3 {
do something
*Send int to Routine 1
Send int to Routine 2
Print Something
Print Something*
do something
}
main {
routine1
routine2
routine3
}
I want that, while the code between the two do somethings (the code between the two star marks) is executing, the flow of control must not go to the other goroutines. For example, when routine 1 is executing the events between the two stars (the sending and printing events), routines 2 and 3 must be blocked (meaning the flow of execution does not pass from routine 1 to routine 2 or 3). After completing the last print event, the flow of execution may pass to routine 2 or 3. Can anybody help me by specifying how I can achieve this? Is it possible to implement the above specification with a WaitGroup? Can anybody show me, with a simple example, how to implement the above with a WaitGroup? Thanks.
NB: Maybe this is a repeat of this question. I tried using that sync-lock mechanism; however, maybe because I have a large code base, I could not put the lock/unlock calls properly and it's creating a deadlock situation (or maybe my method is error-prone). Can anybody help me with a simple procedure so I can achieve this? I give a simple example of my code here, where I want to put the two prints and the sending events inside a mutex (for routine 1) so that routine 2 can't interrupt it. Can you help me with how this is possible? One possible solution was given at http://play.golang.org/p/-uoQSqBJKS, which gives an error.
Why do you want to do this?
The deadlock problem is, if you don't allow other goroutines to be scheduled, then your channel sends can't proceed, unless there's buffering. Go's channels have finite buffering, so you end up with a race condition on draining before they get sent on while full. You could introduce infinite buffering, or put each send in its own goroutine, but it again comes down to: why are you trying to do this; what are you trying to achieve?
Another thing: if you only want to ensure mutual exclusion of the three sets of code between the *s, then yes, you can use mutexes. If you want to ensure that no code interrupts your block, regardless of where it was suspended, then you might need to use runtime.LockOSThread and runtime.UnlockOSThread. These are fairly low level, you need to know what you're doing, and they're rarely needed. If you want there to be no other goroutines running, you'll have to use runtime.GOMAXPROCS(1), which is currently the default.
The problem in answering your question is that it seems no one understands what your problem really is. I see you're asking repeatedly about roughly the same thing, though no progress has been made. There's no offense in saying this. It's an attempt to help you by suggesting that you reformulate your problem in a way comprehensible to others. As a possible nice side effect, some problems solve themselves while being explained to others in an understandable way. I've experienced that many times myself.
Another hint could be in the suspicious mix of explicit syncing and channel communication. That doesn't mean the design is necessarily broken; it just doesn't happen in a typical/simple case. Once again, your problem might be atypical/non-trivial.
Perhaps it's somehow possible to redesign your problem using only channels. Actually, I believe that every problem involving explicit synchronization (in Go) could be coded using only channels. That said, it is true that some problems are written with explicit synchronization very easily. Also, channel communication, as cheap as it is, is not as cheap as most synchronization primitives. But that could be looked after later, when the code works. If the "pattern" for, say, a sync.Mutex visibly emerges in the code, it should be possible to switch to it, and it is much easier to do that when the code already works and hopefully has tests to watch your steps while making the adjustments.
Try to think about your goroutines as independently acting agents which:
Exclusively own the data received from the channel. The language will not enforce this; you must apply your own discipline.
Don't touch the data they've sent to a channel any more. It follows from the first rule, but it is important enough to be explicit.
Interact with other agents (goroutines) through data types which encapsulate a whole unit of workflow/computation. This eliminates, for example, your earlier struggle with getting the right number of channel messages before the "unit" is complete.
For every channel they use, know absolutely clearly beforehand whether the channel must be unbuffered, must be buffered for a fixed number of items, or may be unbounded.
Don't have to think (or know) about what other agents are doing beyond getting a message from them when that is needed for the agent to do its own task - its part of the bigger picture.
Using even such few rules of thumb should hopefully produce code which is easier to reason about and which usually doesn't require any other synchronization. (I'm intentionally ignoring performance issues of mission-critical applications now.)
Just curious. How does actually the function Sleep() work (declared in windows.h)? Maybe not just that implementation, but anyone. With that I mean - how is it implemented? How can it make the code "stop" for a specific time? Also curious about how cin >> and those actually work. What do they do exactly?
The only way I know how to "block" something from continuing to run is with a while loop, but considering that that takes a huge amount of processing power in comparison to what's happening when you're invoking methods to read from stdin (just compare a while (true) to a read from stdin), I'm guessing that isn't what they do.
The OS uses a mechanism called a scheduler to keep all of the threads or processes it's managing behaving nicely together.
Several times per second, the computer's hardware clock interrupts the CPU, which causes the OS's scheduler to be activated. The scheduler then looks at all the processes that are trying to run and decides which one gets to run for the next time slice.
The different things it uses to decide depend on each process's state and how much time it has had before. So if the current process has been using the CPU heavily, preventing other processes from making progress, it will make the current process wait and swap in another process so that it can do some work.
More often, though, most processes are going to be in a wait state. For instance, if a process is waiting for input from the console, the OS can look at the process's information and see which I/O ports it's waiting for. It can check those ports to see if they have any data for the process to work on. If they do, it can start the process up again, but if there is no data, then that process gets skipped over for the current time slice.
As for sleep(), any process can notify the OS that it would like to wait for a while. The scheduler will then be activated even before a hardware interrupt (which is also what happens when a process tries to do a blocking read from a stream that has no data ready to be read), and the OS makes a note of what the process is waiting for. For a sleep, the process is waiting for an alarm to go off, or it may just yield again each time it's restarted until the timer is up.
Since the OS only resumes processes after something causes it to preempt a running process, such as the process yielding or the hardware timer interrupt I mentioned, sleep() is not very accurate. How accurate it is depends on the OS and hardware, but it's usually on the order of one or more milliseconds.
If more accuracy is needed, or very short waits, the only option is to use the busy loop construct you mentioned.
The operating system schedules how processes run (which processes are eligible to run, in what order, ...).
Sleep() probably issues a system call which tells the kernel “don't let me use the processor for x milliseconds”.
In short, Sleep() tells the OS to ignore the process/thread for a while.
'cin' uses a ton of overloaded operators. The '>>' operator, which is usually right bit-shift, is overloaded for pretty much every type of right-hand operand in C++. A separate function is provided for each one, which reads from the console and converts the input into whichever variable type you have given. For example:
std::istream& std::istream::operator>>(int& value);
That's roughly the member function that is called when you run cin >> an integer variable (I haven't worked with streams and overloading in a while, so don't take the exact declaration as gospel).
The exact underlying implementation depends on the operating system.
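To make the overloading idea concrete, here's a small sketch with a made-up type, showing that cin >> x is just a call to whichever operator>> matches x:

#include <iostream>

struct Point { int x, y; };

// our own extraction operator, reusing the built-in int overloads
std::istream& operator>>(std::istream& in, Point& p)
{
    return in >> p.x >> p.y;
}

int main()
{
    Point p;
    if (std::cin >> p)   // resolves to operator>>(std::cin, p)
        std::cout << p.x << "," << p.y << "\n";
}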
The answer depends on the operating system, but generally speaking, the operating system either schedules some other code to run elsewhere in another thread, or if it literally has nothing to do, it gets the CPU to wait until a hardware event occurs, which causes the CPU to jump to some code called an interrupt handler, which can then decide what code to run.
If you are looking for a more controlled way of blocking a thread/process in a multi-threaded program, have a look at Semaphores, Mutexes, CriticalSections and Events. These are all techniques used to block a process or thread (without loading the CPU via a while construct).
They essentially work off a wait/signal idiom where the blocked thread is waiting and another thread signals it to tell it to start again. These (at least on Windows) can also have timeouts, thus providing functionality similar to Sleep().
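For example, a condition variable gives you exactly that wait/signal shape with an optional timeout. A minimal C++11 sketch (the names are made up):

#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

void waiter()
{
    std::unique_lock<std::mutex> lock(m);
    // Blocks without burning CPU; wakes when notified or after 500 ms, whichever comes first.
    if (cv.wait_for(lock, std::chrono::milliseconds(500), [] { return ready; }))
        std::cout << "woken by the signal\n";
    else
        std::cout << "timed out, much like a Sleep(500)\n";
}

void signaler()
{
    {
        std::lock_guard<std::mutex> lock(m);
        ready = true;
    }
    cv.notify_one();  // wake the blocked thread
}

int main()
{
    std::thread t(waiter);
    std::thread s(signaler);
    t.join();
    s.join();
}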
At a low level, the system has a routine called the "scheduler" that dispatches the instructions from all the running programs to the CPU(s), which actually run them. System calls like Sleep and usleep map to instructions that tell the scheduler to IGNORE that thread or process for a fixed amount of time.
As for C++ streams, 'cin' hides the actual file handle (stdin and stdout actually are such handles) you're accessing, and the '>>' operator for it hides the underlying calls to read and write. Since it's an interface, the implementation can be OS-specific, but conceptually it is still doing things like printf and scanf under the hood.