Looking at this OMPL optimization tutorial, I see the line:
ob::PlannerStatus solved = planner->solve(1.4/*seconds timeout*/);
With this PlannerStatus definition.
However, I am using the RRT# algorithm with a certain cost threshold, let's say 10.0 for the sake of argument. If I set it too low, the algorithm by design aborts after the 1.4-second timeout with the best value found so far, and prints a message:
Info: ... Final solution cost 17.071
Info: Solution found in 1.418528 seconds
And returns ob::PlannerStatus::EXACT_SOLUTION - I suppose I do have an exact, but perhaps not optimal solution.
If I run with a different set of data, I can see something like:
Info: ... Final solution cost 9.543
Info: Solution found in 0.003216 seconds
That also, however, returns ob::PlannerStatus::EXACT_SOLUTION.
So, how can I differentiate between a timeout solution and a threshold-matching solution?
EXACT_SOLUTION means that the planner has found a valid path between the start and goal configurations, regardless of its cost. APPROXIMATE_SOLUTION means that the planning time ran out before the planner could find any solution, so it returns the path that gets nearest to the goal configuration.
For your problem, there are two solutions:
The first solution is to check the returned path cost and the planning time. If both are lower than the values you set, it is the solution you're looking for.
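A minimal sketch of that check (assuming planner, pdef, and objective are your already-configured Planner, ProblemDefinition, and OptimizationObjective; isSatisfied() compares a cost against the threshold you set on the objective):
ob::PlannerStatus solved = planner->solve(1.4 /* seconds timeout */);
if (solved == ob::PlannerStatus::EXACT_SOLUTION)
{
    // Cost of the returned path under your optimization objective
    ob::Cost cost = pdef->getSolutionPath()->cost(objective);
    if (objective->isSatisfied(cost))
        std::cout << "threshold-matching solution" << std::endl;
    else
        std::cout << "timeout solution: exact, but above the threshold" << std::endl;
}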
The second one is to change the planner code to return a different solution status; the possible statuses are defined by the StatusType enum.
Once you change the planner code, you need to go to the build directory of OMPL (..build/Release) and run "make install" in the terminal.
You could add your own values to ompl::base::PlannerStatus:
/// The possible values of the status returned by a planner
enum StatusType
{
/// Uninitialized status
UNKNOWN = 0,
/// Invalid start state or no start state specified
INVALID_START,
/// Invalid goal state
INVALID_GOAL,
/// The goal is of a type that a planner does not recognize
UNRECOGNIZED_GOAL_TYPE,
/// The planner failed to find a solution
TIMEOUT,
/// The planner found an approximate solution
APPROXIMATE_SOLUTION,
/// The planner found an exact solution
EXACT_SOLUTION,
/// The planner crashed
CRASH,
/// The planner did not find a solution for some other reason
ABORT,
/// The number of possible status values
TYPE_COUNT
};
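For example (a sketch only; any new value must be added before TYPE_COUNT, and where exactly to return it depends on the RRT# implementation you are modifying), you could introduce a status for the timeout case and return it from the planner's solve():
/// The planner found an exact solution, but hit the timeout
/// before satisfying the cost threshold (custom addition)
EXACT_SOLUTION_TIMEOUT,
// ... and in your modified planner's solve(), at the timeout branch:
return base::PlannerStatus(base::PlannerStatus::EXACT_SOLUTION_TIMEOUT);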
I have a working column generation algorithm in SCIP. Due to specific constraints that I include while generating columns, it might happen that the last pricing round determines that the root node is infeasible (by the Farkas pricer of course).
In case that happens, I would like to 1) relax those specific constraints, 2) re-solve the LP, and 3) start pricing columns again.
So, I have created my own EventHandler class, catching the node infeasibility event:
SCIP_DECL_EVENTINITSOL(EventHandler::scip_initsol)
{
    SCIP_CALL( SCIPcatchEvent(scip_, SCIP_EVENTTYPE_NODEINFEASIBLE, eventhdlr, NULL, NULL) );
    return SCIP_OKAY;
}
And, correspondingly, the scip_exec virtual method:
SCIP_DECL_EVENTEXEC(EventHandler::scip_exec)
{
    SCIP_Real cur_rhs = SCIPgetRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut);
    SCIP_CALL( SCIPchgRhsLinear(scip_, (*d_varConsInfo).c_primal_obj_cut, cur_rhs + DELTA) );
    return SCIP_OKAY;
}
Where (*d_varConsInfo).c_primal_obj_cut is the specific constraint to be changed, DELTA is a global parameter, and cur_rhs is the current right-hand side of that constraint. This function is neatly called after the node infeasibility proof; however, I do not know how to 'tell' SCIP that the LP should be re-solved and possible new columns should be included. Can somebody help me out with this?
When the event handler catches the NODEINFEASIBLE event, it is already too late to change anything about the infeasibility of the problem; the node processing is already finished. Additionally, you are not allowed to change the rhs of a constraint during the solving process (because this means that reductions done before would potentially be invalid).
I would suggest the following: if your Farkas pricing is not able to identify new columns to render the LP feasible again, the node will subsequently be declared infeasible. Therefore, at the end of Farkas pricing (if you are at the root node), you could just price an auxiliary variable that you add to the constraint that you want to relax, with bounds corresponding to your DELTA. Note that you need to have marked the constraint as modifiable when creating it. Then, since a variable was added, SCIP will trigger another pricing round.
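A hedged sketch of what that could look like at the end of your Farkas pricing (the names are illustrative; the -1.0 coefficient assumes a <=-constraint, so the variable's [0, DELTA] domain effectively relaxes the rhs by up to DELTA):
SCIP_VAR* auxvar;

/* Auxiliary variable with domain [0, DELTA]; a high objective cost keeps
   it out of the solution unless it is really needed for feasibility. */
SCIP_CALL( SCIPcreateVar(scip, &auxvar, "relax_aux", 0.0, DELTA, 1e6,
                         SCIP_VARTYPE_CONTINUOUS, TRUE, FALSE,
                         NULL, NULL, NULL, NULL, NULL) );

/* Register it as a priced variable so SCIP triggers another pricing round... */
SCIP_CALL( SCIPaddPricedVar(scip, auxvar, 1.0) );

/* ...and hook it into the (modifiable) constraint that should be relaxed. */
SCIP_CALL( SCIPaddCoefLinear(scip, (*d_varConsInfo).c_primal_obj_cut, auxvar, -1.0) );

SCIP_CALL( SCIPreleaseVar(scip, &auxvar) );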
Maybe you should take a look at the PRICERFARKAS method of SCIP (https://scip.zib.de/doc/html/PRICER.php#PRICER_FUNDAMENTALCALLBACKS):
If the current LP relaxation is infeasible, it is the task of the pricer to generate additional variables that can potentially render the LP feasible again. In standard branch-and-price, these are variables with positive Farkas values, and the PRICERFARKAS method should identify those variables.
I have an IP camera that receives commands using POST HTTP requests (for example to call PTZ commands or set various camera settings). The standard way of controlling it is through its own web interface, which is partially an ActiveX plugin and partially standard HTML+JS. Of course, because of the ActiveX part it only works in IE under Windows.
I'm attempting to change that by figuring out all the commands and writing a small Python or JavaScript program to do the same, so that it is more cross-platform.
I have one major problem. Each POST request contains a calculated "cc" field which I assume is a checksum. The JS code in the cam interface points out that it is calculated by calling a function inside the plugin:
tt = new Date().Format("yyyyMMddhhmmss");
jo_header["tt"] = tt;
if (getCpPlugin() != null && getCpPlugin().valid) {
jo_header["cc"] = getCpPlugin().nsstpGetCC(tt, session_id);
}
The nsstpGetCC function obviously calculates the checksum from two parameters, the timestamp and the session_id. A real example (captured with Wireshark):
tt = "20171018231918"
session_id = "30303532646561302D623434612D3131"
cc = "849e586524385e1071caa4023a3df75401e5bb82"
The checksum seems to be 160-bit. I tried both SHA-1 and RIPEMD-160 and all combinations of concatenating tt and session_id I could think of, but I can't seem to get the same hash as the one the original plugin produces. The plugin DLL seems to be written in C++, and I have almost no experience with decompilation to dive into this problem from that angle.
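For reference, my attempts look roughly like this (a small OpenSSL-based harness; the candidate list is only an excerpt of the combinations I tried, with HMAC-SHA1 as one more guess worth ruling out; build with -lcrypto):
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>
#include <cstdio>
#include <string>
#include <vector>

static std::string to_hex(const unsigned char *d, size_t n)
{
    static const char *digits = "0123456789abcdef";
    std::string s;
    for (size_t i = 0; i < n; ++i) {
        s += digits[d[i] >> 4];
        s += digits[d[i] & 0x0f];
    }
    return s;
}

int main()
{
    const std::string tt = "20171018231918";
    const std::string sid = "30303532646561302D623434612D3131";
    const std::string target = "849e586524385e1071caa4023a3df75401e5bb82";

    unsigned char md[SHA_DIGEST_LENGTH];

    // Plain SHA-1 over a few concatenation orders
    const std::vector<std::pair<std::string, std::string>> candidates = {
        {"sha1(tt + sid)", tt + sid},
        {"sha1(sid + tt)", sid + tt},
    };
    for (const auto &c : candidates) {
        SHA1(reinterpret_cast<const unsigned char *>(c.second.data()), c.second.size(), md);
        if (to_hex(md, sizeof md) == target)
            std::printf("match: %s\n", c.first.c_str());
    }

    // One more guess: HMAC-SHA1 with the session id as the key
    unsigned int len = 0;
    HMAC(EVP_sha1(), sid.data(), static_cast<int>(sid.size()),
         reinterpret_cast<const unsigned char *>(tt.data()), tt.size(), md, &len);
    if (to_hex(md, len) == target)
        std::printf("match: hmac_sha1(key = sid, msg = tt)\n");

    return 0;
}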
So my question basically is: can someone figure out how they calculated that cc, or at least give me an idea in which direction to research further? Maybe I'm looking at the wrong hash algorithms or something. Or give me some idea of how I could figure out what the original ActiveX function nsstpGetCC is doing, for example by decompilation or by monitoring its operation in memory while it runs. What tools should I use?
I want to know how to create persistent nodes in ZooKeeper using the C++ client. I know from the documentation that there is a method zoo_acreate, and the documentation says about this method that:
This method will create a node in ZooKeeper. A node can only be created if it does not already exist. The Create Flags affect the creation of nodes. If the ZOO_EPHEMERAL flag is set, the node will automatically get removed if the client session goes away. If the ZOO_SEQUENCE flag is set, a unique monotonically increasing sequence number is appended to the path name.
But, unfortunately, almost as always with C++ libraries, this library completely lacks reasonable teeny-weeny examples demonstrating the usage of the library methods. As in this case, where the documentation page is about the zoo_acreate method, but the terrible-looking example is totally about something else (it does not even mention the zoo_acreate method).
So, my question is how to set these flags, ZOO_EPHEMERAL and ZOO_SEQUENCE. It would be great to see this in the context of some tiny example. Thanks!
Googling for "zoo_acreate ZOO_EPHEMERAL" gave this as the seventh result:
string path = "/nodes/";
string value = "data";
int rc = zoo_acreate(zh, path.c_str(), value.c_str(), value.length(),
                     &ZOO_OPEN_ACL_UNSAFE, ZOO_EPHEMERAL | ZOO_SEQUENCE,
                     &czoo_created, &where);
Source: https://issues.apache.org/jira/browse/ZOOKEEPER-348
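And since you asked specifically about persistent nodes: a small self-contained sketch (error handling trimmed; "localhost:2181" and the node path are assumptions). For a plain persistent node you simply pass 0 for the flags argument; ZOO_EPHEMERAL and ZOO_SEQUENCE are OR'ed in only when you want those behaviours:
#include <zookeeper/zookeeper.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>

// Completion callback for the asynchronous create
static void on_created(int rc, const char *name, const void *data)
{
    std::printf("create completed: rc=%d, name=%s\n", rc, name ? name : "(null)");
}

int main()
{
    zhandle_t *zh = zookeeper_init("localhost:2181", nullptr, 30000, nullptr, nullptr, 0);
    if (zh == nullptr)
        return 1;

    const char *value = "data";

    // Persistent node: flags == 0 (no ZOO_EPHEMERAL, no ZOO_SEQUENCE)
    int rc = zoo_acreate(zh, "/my_persistent_node", value,
                         static_cast<int>(std::strlen(value)),
                         &ZOO_OPEN_ACL_UNSAFE, 0 /* persistent */,
                         on_created, nullptr);
    std::printf("zoo_acreate queued: rc=%d\n", rc);

    sleep(2); // crude: give the completion a chance to fire before closing
    zookeeper_close(zh);
    return 0;
}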
Why is this test causing the following failure and error?
expected 'NO. ONE' to equal 'ITEM TWO'
<unknown> at /swiper-slider/test/basic-test.html:59
Object.Fake.downAt at /polymer-gestures/test/js/fake.js:98
Object.Fake.downOnNode at /polymer-gestures/test/js/fake.js:89
Context.<anonymous> at /swiper-slider/test/basic-test.html:56
polymer-gestures/test/js/fake.js is failing to find the target in this method, called from this method, but I can't narrow down the exact culprit.
My hunch is that it has something to do with the div.swiper-button-next element being appended as a child on the fly and the use of document.querySelector('swiper-slider /deep/ div.swiper-button-next') in the test.
I have a hunch that one of a few things is happening.
1. The div.swiper-button-next isn't in the DOM by the time you make the call. Either work a callback into your system to fire when everything is done (then check the value inside that callback), or (just to test if this is actually the problem) put a manual setTimeout to delay the query selector and assertion for a bit.
2. Polymer's targetAt function uses elementFromPoint() internally. Double-check that you don't have any overlays (core-overlay tends to crap all over my window sometimes...) and that the element you really want to tap is actually the element being found. Don't be afraid to put some debugging/console.log statements into the actual Polymer source code to see what it is finding there.
3. I haven't spent too much time looking over the test, but your slideEls[1] could possibly have changed since you queried for it. querySelectorAll returns a "non-live" NodeList, so changes to the DOM don't update your selection.
I find the documentation on EUnit lacking with regard to how to test a multi-node application. I found this example, but sadly when I run:
cluster_test_() ->
    {node, foo,
     fun (Node) ->
         [?_assertEqual(pong, net_adm:ping(Node))]
     end}.
I get:
undefined
*** context setup failed ***
** in function slave:start/5 (slave.erl, line 197)
**exit:not_alive
Am I doing something wrong here?
As a sidenote, I also looked at gproc's distributed test here, but it's manually starting a number of slave nodes rather than using the built-in Eunit functionality.
Can someone give me some examples of how to use the node test fixture?
Thanks,
Common Test was written especially for testing bigger systems. Besides the official documentation, you can find a very good introduction to the topic here. And the chapter even ends with a small snippet on how to integrate existing EUnit tests into Common Test.
Hm, I never got the slave node functionality to work properly, so it shouldn't be a documented feature. I guess it ended up in the docs while I still thought it was working. I'll probably have to fix the docs.
If you are going with multi-node tests and EUnit, keep in mind that the eunit ifdefs in your modules change their checksums; if, say, one module is compiled with the eunit ifdef and another is not, you will get errors when trying to call remote functions.
My suggestion is that your master node is running with distribution disabled. Enable it with the -sname flag (I assume your example code is located in the module node_test):
> erl -sname master
(master@hostname)1> c(node_test).
(master@hostname)2> node_test:test().
But that is not all. To run this code on newer versions of Erlang, you should make some small changes:
cluster_test_() ->
    {node, foo,
     fun ({Node, StopNet}) ->
         ?debugFmt("Node ~p", [Node]),
         ?debugFmt("StopNet ~p", [StopNet]),
         [?_assertEqual(pong, net_adm:ping(Node))]
     end}.
Note that the function argument is now not the node name but a tuple with two elements: the first element is the remote node name, and the second is a boolean flag which is always false (at least for now). For more detail, refer to the eunit sources.