Can solver status always be inferred from the termination condition? - pyomo

I would assume that if the termination condition is "optimal", then the solver status must be "ok". The documentation located at https://pyomo.readthedocs.io/en/latest/working_models.html#accessing-solver-status seems to confirm this by stating that "the value 'optimal' indicates that the solver succeeded." But the example that follows that paragraph checks that the solver status is "ok" AND that the termination condition is "optimal".
So can the termination condition by itself not be trusted? Are there actually cases where the termination condition is "optimal" but the solver status is something like "error"? Does this depend on which solver is being used?
Adding to my confusion, the example that immediately follows that one checks only the termination condition. The example located at http://www.pyomo.org/blog/2015/1/8/accessing-solver is essentially the same as the first one and checks both solver status and termination condition.
Thank you in advance for any help.

I am not sure I understand all your questions, or that I can answer them, but I can give you an example that I detailed in Pyomo-IPOPT: solver falls into local minima, how to avoid that?
Using IPOPT as the solver, the solver fell into a local minimum while still returning an "optimal" termination condition.
This answers:
So can the termination condition by itself not be trusted?
Yes.
and
Does this depend on which solver is being used?
Probably, as my test case would not happen with a reliable global solver.
Are there actually cases where the termination condition is "optimal" but the solver status is something like "error"?
I have never seen this happen; it seems like it would mean that the logic of the termination messages is broken.

I am not sure I understand your question completely. Since SO isn't a site for "read me the docs" questions, I assume that your question is about the technical way of using the library, so I believe this will help answer your question without sounding like I am just repeating what is in the docs.
The way I see it, solver statuses are used in the given examples as a way of avoiding errors. Logically, the solver has to have an "ok" status in order to return a meaningful termination condition. Even if your solver returns an "infeasible" termination condition, that doesn't mean the solver encountered an error, far from it! The status will still be "ok", because your solver worked flawlessly up to that point and is ready for another optimization. However, when the status is "error", it means that something much worse happened: something broke inside the solver, or in its interface to Pyomo, and this is one way to find out. Chances are that your solver could not even produce a termination condition. In that case, you may want to avoid retrieving a termination condition the solver could not give. This saves you from raising exceptions and helps handle such cases. This is a very good practice.
So, you should not rely solely on the solver status; rely on the termination condition to learn whether your problem is "optimal", hit "maxTimeLimit", and so on. However, you are better off first testing, via the solver status, that the solver was able to return a termination condition at all, in case something bad happened inside your solver.
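A minimal sketch of that two-step check, using plain strings in place of Pyomo's `SolverStatus` and `TerminationCondition` enum values so it runs without a solver installed (in real code you would `from pyomo.opt import SolverStatus, TerminationCondition` and compare `results.solver.status` and `results.solver.termination_condition` against `SolverStatus.ok` and `TerminationCondition.optimal`):

```python
# Sketch of the check pattern from the Pyomo docs, with strings standing in
# for the SolverStatus / TerminationCondition enum values. The point is the
# order of the checks: look at the solver status first, because only when it
# is "ok" is the termination condition meaningful at all.

def classify_result(status, termination):
    """Return a short description of a solve result."""
    if status == "ok" and termination == "optimal":
        return "solved to (claimed) optimality"
    elif termination == "infeasible":
        return "model is infeasible"
    else:
        # e.g. status == "error": the solver broke before it could
        # even report a useful termination condition.
        return "something went wrong: status=%s, termination=%s" % (
            status, termination)

print(classify_result("ok", "optimal"))
print(classify_result("error", "unknown"))
```

Note that, per the IPOPT example above, even "ok"/"optimal" only means the solver *claims* optimality; a local solver may still have stopped at a local minimum.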

Related

Detecting infinite recursion in v8

I am using google's v8 javascript engine to have an embedded js interpreter in my project, which must be able to execute user-provided code, but I am wondering if it is possible to set something up in advance of calling any user code which ensures that if the code tries to recurse indefinitely (or even if it just executes for too long), that it can somehow be made to abort, throw an otherwise uncaught exception, and report the issue back to the caller.
Thank you all for responses so far... yes, I realized not long after I posted this that I was basically asking for some kind of solution to the halting problem, which I know is unsolvable, and is actually far more than what I really need.
What I'd need is either some mechanism for detecting when something running in the v8 environment is not returning quickly enough, or else simply a mechanism to detect if recursion is happening at all... my use cases are such that the end user should not be utilizing any recursion anyway, and if I can possibly even detect that, then I could reject it at that point instead of blindly executing it. It would be allowed, however, for different threads, with different isolates, to invoke the same functions at the same time, so I can't just use a static local variable to lock out another call to the same function.
A compiler [V8 is definitely a compiler in this context, even if it isn't "always" a compiler] can detect recursion, but if the code is clever enough (for example depending on variables that aren't known at compile time), it's not possible to detect whether it has infinite or finite recursion.
I would simply state that "execution over X seconds is disallowed", and if the execution takes more than that long, abort it. You can do this by having a "watchdog thread", that gets triggered when the code completes - and if the watchdog thread gets to run X seconds, kill the main thread and report back to user-code. No, I don't know EXACTLY how to write this code in conjunction with V8.

What's the worst that could happen with while(true)?

I'm practicing C++ through Visual C++ Express and tutorials on Youtube. I made a calculator by following a cool video, but I didn't want the program to end right after you do one equation, so I searched for what to do and tried while(true).
The problem is, instead of a number, when the program prompted me to type in a number, I typed in "blah", hit enter, and the program started to just go crazy so I exited real quick.
Without the while(true), if I type in "blah", then the program instantly goes to the "press any button to exit..." phrase. So what I'm wondering is, how dangerous is while(true)?
Also, I'm really sorry if this question is inappropriate-_-
The worst thing that will happen with while(true){} is that you will go into an infinite loop.
If you write something into those brackets other than an instruction that breaks the loop, it will be repeated over and over again.
This answers your question. If you want to know what's wrong with your calculator, create a new question with a specific problem.
while(true) can obviously create an "infinite" loop, which just means there's nothing internal to the process capable of causing the loop to terminate. It's still possible that the Operating System may decide (e.g. after a CPU usage quota is exhausted) or be asked (e.g. kill -HUP processid on UNIX/Linux) to terminate the process.
It's worth considering the difference between infinite loops containing:
blocking (I/O?) requests, such as waits for new network client connections or data or keyboard input, or even just for a specific interval to elapse, then a finite amount of periodic or I/O related processing, and
non-blocking branching and data-processing instructions that just spin around burning CPU and/or thrashing CPU/memory caches (e.g. x += array[random(array.size())];)
The former case may actually be deliberately used in some cases, such as a server process that doesn't need to do any orderly shutdown and can therefore let the OS shut it down without running any post-loop code, but the second case is usually a programming error: you tend to only have such a while(true) when there's some way for an exit condition or an error occurring during processing controlled by the loop to interrupt the loop. For example, there may be an if (condition) break;, or something that will throw an exception when there's an error. This may not be obvious - it could for example be an exception from some function that's invoked - even from having set the Standard iostream's functions to throw on conversion failure or EOF - rather than a visible throw statement in the source code within the loop.
It's worth noting that many other things can create infinite loops too - for example while (a < b) could go forever if there's no conditions under which a could become >= b. So, it's a general aspect of programming that you have to consider how your loops exit, and not some problem with the safety of the language. For inputs, it's normal to do something like:
while (std::cin >> my_int)
{
// can use my_int...
}
// failed to convert next input from stdin to an int and/or hit EOF
If you write a console application, you intend it to be run from within a console.
Don't add while loops, cin.get(), or anything like that; just use your program as it's intended to be used.

Issue with Mutual Execution of Concurrent Go Routines

In my code there are three concurrent routines. I try to give a brief overview of my code,
Routine 1 {
do something
*Send int to Routine 2
Send int to Routine 3
Print Something
Print Something*
do something
}
Routine 2 {
do something
*Send int to Routine 1
Send int to Routine 3
Print Something
Print Something*
do something
}
Routine 3 {
do something
*Send int to Routine 1
Send int to Routine 2
Print Something
Print Something*
do something
}
main {
routine1
routine2
routine3
}
I want that, while the code between the two do something lines (the code between the two star marks) is executing, the flow of control must not pass to the other goroutines. For example, when routine1 is executing the events between the two stars (the sending and printing events), routines 2 and 3 must be blocked (meaning the flow of execution does not pass from routine 1 to routine 2 or 3). After completing the last print event, the flow of execution may pass to routine 2 or 3. Can anybody help me by specifying how I can achieve this? Is it possible to implement the above specification with a WaitGroup? Can anybody show me, with a simple example, how to implement the above by using a WaitGroup? Thanks.
NB: This may be a repeat of this question. I tried using that sync-lock mechanism; however, perhaps because my code is large, I could not place the lock/unlock calls properly, and it creates a deadlock situation (or maybe my method is error-prone). Can anybody help me with a simple procedure so I can achieve this? I give a simple example of my code here, where I want to put the two prints and the sending events inside a mutex (for routine 1) so routine 2 can't interrupt it. Can you help me with how this is possible? One possible solution given,
http://play.golang.org/p/-uoQSqBJKS which gives an error.
Why do you want to do this?
The deadlock problem is, if you don't allow other goroutines to be scheduled, then your channel sends can't proceed, unless there's buffering. Go's channels have finite buffering, so you end up with a race condition on draining before they get sent on while full. You could introduce infinite buffering, or put each send in its own goroutine, but it again comes down to: why are you trying to do this; what are you trying to achieve?
Another thing: if you only want to ensure mutual exclusion of the three sets of code between *s, then yes, you can use mutexes. If you want to ensure that no code interrupts your block, regardless of where it was suspended, then you might need to use runtime.LockOSThread and runtime.UnlockOSThread. These are fairly low level, you need to know what you're doing, and they're rarely needed. If you want there to be no other goroutines running, you'll have to use runtime.GOMAXPROCS(1), which is currently the default.
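The mutex option can be sketched like this (in Python with `threading.Lock`; the pattern carries over directly to Go with a shared `sync.Mutex` and `mu.Lock()` / `mu.Unlock()` around each starred block - the routine and event names here are illustrative, not from the question's code):

```python
# Each routine wraps its "starred" block (send + two prints) in the same
# shared lock, so no other routine can interleave inside that block.
import threading

lock = threading.Lock()
log = []  # shared record of events, to show the blocks never interleave

def routine(name):
    # ... do something (outside the critical section) ...
    with lock:
        # Everything here runs without interleaving from other routines.
        log.append(name + ":send")
        log.append(name + ":print1")
        log.append(name + ":print2")
    # ... do something else ...

threads = [threading.Thread(target=routine, args=(n,)) for n in ("r1", "r2", "r3")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each routine's three events appear as an unbroken run of three.
print(log)
```

The order in which the routines win the lock is unspecified, but within each routine the three events are always contiguous - which is exactly the mutual exclusion asked for.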
The problem in answering your question is that it seems no one understands what your problem really is. I see you're asking repeatedly about roughly the same thing, though no progress has been made. There's no offense in saying this. It's an attempt to help you by suggesting that you reformulate your problem in a way comprehensible to others. As a possible nice side effect, some problems solve themselves while being explained to others in an understandable way. I've experienced that many times myself.
Another hint could be in the suspicious mix of explicit syncing and channel communication. That doesn't mean the design is necessarily broken; it just doesn't happen in a typical/simple case. Once again, your problem might be atypical/non-trivial.
Perhaps it's somehow possible to redesign your problem using only channels. Actually I believe that every problem involving explicit synchronization (in Go) could be coded using only channels. That said, it is true that some problems are written with explicit synchronization very easily. Also channel communication, as cheap as it is, is not as cheap as most synchronization primitives. But that can be looked at later, once the code works. If the "pattern" for, say, a sync.Mutex visibly emerges in the code, it should be possible to switch to it, and it is much easier to do so when the code already works and hopefully has tests to watch your steps while making the adjustments.
Try to think about your goroutines like independently acting agents which:
Exclusively own the data received from the channel. The language will not enforce this; you must apply your own discipline.
Never again touch the data they've sent to a channel. This follows from the first rule, but it is important enough to be explicit.
Interact with other agents (goroutines) via data types which encapsulate a whole unit of workflow/computation. This eliminates e.g. your earlier struggle with getting the right number of channel messages before the "unit" is complete.
For every channel they use, it must be absolutely clear beforehand whether the channel must be unbuffered, buffered for a fixed number of items, or may be unbounded.
Don't need to know what other agents are doing, beyond receiving a message from them when that is needed for the agent to do its own task - its part of the bigger picture.
Using even such a few rules of thumb should hopefully produce code which is easier to reason about and which usually doesn't require any other synchronization. (I'm intentionally ignoring performance issues of mission-critical applications here.)
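The "agents exchanging whole units of work" idea can be sketched like this (in Python, with `queue.Queue` standing in for a Go channel and a `None` sentinel standing in for closing the channel; the unit layout is illustrative):

```python
# The producer hands over one complete unit of work per message and never
# touches it again; the consumer becomes its exclusive owner.
import queue
import threading

jobs = queue.Queue()       # the "channel" carrying whole units of work
results = queue.Queue()

def producer():
    for i in range(3):
        unit = {"id": i, "payload": list(range(i + 1))}
        jobs.put(unit)     # after this, the producer never touches `unit`
    jobs.put(None)         # sentinel: no more work (like closing a channel)

def consumer():
    while True:
        unit = jobs.get()
        if unit is None:
            break
        # The consumer is the exclusive owner of `unit` now; no locking
        # is needed to process it.
        results.put((unit["id"], sum(unit["payload"])))

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()

out = []
while not results.empty():
    out.append(results.get())
print(sorted(out))  # [(0, 0), (1, 1), (2, 3)]
```

Because each message is a self-contained unit, there is no counting of partial messages and no shared mutable state - the two things that usually force explicit synchronization back into a channel design.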

Sensible strategy for unit testing expected and non-expected deadlock behavior

I'd like some ideas about how I should test some objects that can block, waiting for another participant. The specific unit to be tested is the channel between the participants; the participants themselves are mock fixtures for the purposes of the tests.
It would be nice to validate that the participants do deadlock when they are expected to, but this is not terribly important to me, since what happens after the deadlock can reasonably be described as undefined.
More critical would be to verify that the defined interactions from the participants do not deadlock.
In either case, I'm not really sure what the optimal testing strategy should be. My current notion is to have the test runner fire off a thread for each participant, sleep for a while, then discover if the child threads have returned. In the case they have not returned in time, assume that they have deadlocked, and safely terminate the threads, and the test fails (or succeeds if the deadlock was expected).
This feels a bit probabilistic, since there could be all sorts of reasons (however unlikely) that a thread might take longer than expected to complete. Are there any other, good ways of approaching this problem?
EDIT: I'm sure a soundness in testing would be nice, but I don't think I need to have it. I'm thinking in terms of three levels of testing certainty.
"The actual behavior has proven to match the expected behavior" deadlock cannot occur
"The actual behavior matched the expected behavior" deadlock did not occur in N tests
"The actual behavior agrees with the expected behavior" N tests completed within expected deadline
The first of course is a valuable test to pass, but ShiDoiSi's answer speaks to the impracticality of that. The second one is significantly weaker than the first, but still hard; How can you establish that a network of processes has actually deadlocked? I'm not sure that's any easier to prove than the first (maybe a lot harder)
The last one is more like what I have in mind.
The only way to reliably test for deadlocks is to instrument the locking subsystem to detect and report them. The last time I had to do this, we built a debug version of it that recorded which threads held which locks and checked for potential deadlocks on every lock-obtain call. It can be a heavyweight operation in a system with a lot of locking going on, but we found it to be so valuable that we reorganized the subsystem so we could turn it on and off with a switch at runtime, even in production builds.
The academic community will probably tell you (in fact it IS telling you right now ;) that you should do a faithful abstraction into some so-called model-checking framework (CSP, pi-calculus). That would then simulate abstract executions (an exhaustive search through all possible scheduler interleavings). Of course the trick is to make sure that the abstraction IS actually faithful. You are no longer checking the actual source of your program, but the source in some other language.
Otherwise, some heavy-handed approach like using Java Path Finder/Explorer (which does something very similar) for the particular language comes to mind.
Similar research prototypes exist for C, and Intel and other companies are also in this business with specialised tools.
You are looking at one of the hot topics in Computer Science research, and for non-trivial/real systems, neither exhaustive testing nor formal verification are easily applicable to real code.
A valuable approach could be to instrument your code so that it will actually detect a deadlock, and potentially try to recover. For detecting deadlocks, the FreeBSD kernel uses a set of C-macros that track lock usage and report potential violations through the witness(4) mechanism. But again, errors that only occur rarely, will only be rarely spotted.
(Disclaimer: I'm not involved in any of the commercially tools linked above---I just added them to give you a feeling for the difficulty of the problem you are facing.)
For testing that there is no deadlock, you could use the equivalent of NUnit's TimeoutAttribute, which aborts and fails a test if execution time exceeds an upper limit. You could come up with a good timeout value, e.g. if the test doesn't complete within 30s, something is wrong.
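A rough equivalent of that timeout attribute can be sketched in Python (the helper name and the deliberately-deadlocking scenario are illustrative; a known cost of this approach is that the timed-out thread leaks, since Python threads cannot be killed):

```python
# Run the scenario on a worker thread and fail the "test" if it hasn't
# finished within the budget - the same idea as NUnit's TimeoutAttribute.
import threading

class DeadlineExceeded(AssertionError):
    pass

def assert_completes_within(scenario, seconds):
    t = threading.Thread(target=scenario, daemon=True)
    t.start()
    t.join(timeout=seconds)
    if t.is_alive():
        raise DeadlineExceeded("scenario still running after %ss" % seconds)

# A deliberate "deadlock": a non-reentrant lock acquired twice by one thread.
def deadlocks():
    lock = threading.Lock()
    lock.acquire()
    lock.acquire()  # blocks forever

assert_completes_within(lambda: None, 1.0)  # passes quickly
try:
    assert_completes_within(deadlocks, 0.2)
except DeadlineExceeded as e:
    print("caught:", e)
```

As the question notes, this is probabilistic: a slow machine can trip the timeout without any deadlock, so the budget should be generous relative to the expected run time.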
I'm not sure (or I haven't come across a situation) about asserting that a deadlock has occurred. Deadlocks are usually undesirable. I'm stumped on how to write a unit test that fails unless the test blocks - unit tests are usually supposed to be fast and non-blocking.
Since you've already done enough abstraction to mock out the participants, why not take it further and abstract out your thread synchronization (mutex, semaphore, whatnot)?
When you think about what constitutes a deadlock, you could use a specialized, deadlock-aware thread synchronizer in your tests. By "deadlock-aware", I don't mean that it should detect deadlocks the brute-force way by using timeouts etc., but have awareness of the situations that lead to deadlocks by way of flags, counters etc. It could detect deadlocks, while optionally providing the expected thread synchronization functionality. What I'm basically saying is, use instrumented thread synchronization for your tests...
This is all too abstract and easier said than done. And I don't claim to have successfully done it. I might simply be being silly here. But perhaps if you could provide just one (incomplete) test, the problem can be attacked in more concrete terms.

Testing concurrent data structure

What are some methods for testing concurrent data structures to make sure the data structs behave correctly when accessed from multiple threads ?
All of the other answers have focused on actually testing the code by putting it through its paces and actually running it in one form or another or politely saying "don't do it yourself, use an existing library".
This is great and all, but IMO the most important test (practical tests are important too) is to look at the code line by line and, for every line of code, ask "what happens if I get interrupted by another thread here?" Imagine another thread running just about any of the other lines/functions during this interruption. Do things still stay consistent? When competing for resources, do the other thread[s] block or spin?
This is what we did in school when learning about concurrency and it is a surprisingly effective approach. Bottom line, I feel that taking the time to prove to yourself that things are consistent and work as expected in all states is the first technique you should use when dealing with this stuff.
Concurrent systems are probabilistic and errors are often difficult to replicate. Therefore you need to run various input/output cases, each tested over time (hours, days, etc) in order to detect possible errors.
Tests for a concurrent data structure involve examining the container's state before and after expected events such as insert and delete.
Use a pre-existing, pre-tested library that meets your needs if possible.
Make sure that the code has appropriate self-consistency checks (preferably fast sanity checks), and run your code on as many different types of hardware as possible to help narrow down interesting timing problems.
Have multiple people peer review the code, preferably without a pre-explanation of how it's supposed to work. That way they have to grok the code which should help catch more bugs.
Set up a bunch of threads that do nothing but random operations on the data structures and check for consistency at some rate.
Start with the assumption that your calls to access/modify data are not thread safe and use locks to ensure only a single thread can access/modify any part of the data at a time. Only after you can prove to yourself that a specific type of access is safe outside of the lock by multiple threads at once should you move that code outside of the lock.
Assume worst case scenarios, e.g. that your code will stop right in the middle of some pointer manipulation or another critical point, and that another thread will encounter that data in mid-transition. If that would have a bad result, leave it within the lock.
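The "random operations plus consistency checks" idea from the list above can be sketched like this (the toy `LockedBag` structure and its redundant counter are illustrative, chosen so there is an invariant that only holds if the locking is correct):

```python
# Several threads hammer a structure with random operations, then we check
# an invariant: the redundant count must equal the number of stored items.
import threading
import random

class LockedBag:
    """Toy structure: a list plus a running count, guarded by one lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._items = []
        self._count = 0  # must always equal len(self._items)

    def add(self, x):
        with self._lock:
            self._items.append(x)
            self._count += 1

    def remove_any(self):
        with self._lock:
            if self._items:
                self._items.pop()
                self._count -= 1

    def consistent(self):
        with self._lock:
            return self._count == len(self._items)

bag = LockedBag()

def hammer():
    for _ in range(1000):
        if random.random() < 0.6:
            bag.add(1)
        else:
            bag.remove_any()

threads = [threading.Thread(target=hammer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(bag.consistent())
```

Per the advice above, this starts from the conservative position (every access under the lock); removing the lock from any of the methods is exactly the kind of change such a stress test is meant to catch, though absence of a failure is never proof of correctness.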
I normally test these kinds of things by interjecting sleep() calls at appropriate places in the distributed threads/processes.
For instance, to test a lock, put sleep(2) in all your threads at the point of contention, and spawn two threads roughly 1 second apart. The first one should obtain the lock, and the second should have to wait for it.
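That lock test can be sketched like this (in Python; the half-second hold time and the staggered start are the "sleep at the point of contention" from the description, with generous margins so scheduling jitter doesn't flip the outcome):

```python
# Two contenders started roughly staggered; the first should obtain the
# lock, and the second should observably have to wait for it.
import threading
import time

lock = threading.Lock()
order = []  # records who acquired the lock, in what order

def contender(name, start_delay):
    time.sleep(start_delay)
    with lock:
        order.append(name)
        time.sleep(0.5)  # hold the lock long enough to force contention

t1 = threading.Thread(target=contender, args=("first", 0.0))
t2 = threading.Thread(target=contender, args=("second", 0.2))
t1.start(); t2.start()
t1.join(); t2.join()
print(order)  # the earlier starter should have acquired the lock first
```

This has the same probabilistic caveat discussed in the previous answer: the sleeps make the expected interleaving very likely, not guaranteed, so the margins should dwarf any plausible scheduling delay.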
Most race conditions can be tested by extending this method, but if your system has too many components it may be difficult or impossible to know every possible condition that needs to be tested.
Run your concurrent threads for one or a few days and look what happens. (Sounds strange, but finding out race conditions is such a complex topic that simply trying it is the best approach).