I want to find the symmetry group of an integer linear program. I think there is a function in SCIP called SCIPgetGeneratorsSymmetry. How can I use this function?
You are right: to access symmetry information in SCIP, you have to call the function SCIPgetGeneratorsSymmetry() via C/C++. Note that you need to link SCIP against the external software bliss; otherwise, SCIP is not able to compute symmetries of your (mixed-integer) linear program.
If you set up your (mixed-integer) linear program using a C/C++ project, you have several options for computing symmetries.
If you set the "recompute" parameter to FALSE, SCIP returns the currently available symmetry information; if symmetries have not been computed yet, SCIP computes them first to give you access to this information.
If you set "recompute" to TRUE, SCIP will discard the available symmetry information and you get access to the generators of the current symmetry group. Moreover, you can control the kind of symmetries that are computed via the parameters "symspecrequire" and "symspecrequirefixed", e.g., to only compute symmetries of binary variables that fix continuous variables.
Edit:
If you have no experience with coding in C/C++ and you are only interested in printing the generators of the symmetry group, the easiest way is probably to modify SCIP's source code in presol_symmetry.c as follows:
Add two integer variables int i and int p at the very beginning of determineSymmetry().
Search within determineSymmetry() for the line in which computeSymmetryGroup() is called.
Add the following code snippet right after this function call:
for (p = 0; p < presoldata->nperms; ++p)
{
   printf("permutation %d\n", p);
   for (i = 0; i < presoldata->npermvars; ++i)
   {
      if ( TRUE )
         printf("%d ", presoldata->perms[p][i]);
      else
         printf("%s ", SCIPvarGetName(presoldata->permvars[presoldata->perms[p][i]]));
   }
   printf("\n");
}
This code prints the generators of the symmetry group as a list of variable indices, e.g., 1 2 0 is the permutation that maps 0 -> 1, 1 -> 2, and 2 -> 0. If you change TRUE to FALSE, you get the same list but variable indices are replaced by their names.
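To see how to read this output, here is a small plain-Python illustration (independent of SCIP) of the index format: the entry at position i of the printed list is the image of variable index i under the permutation.

```python
# Plain-Python illustration of the printed generator format (not SCIP code):
# the list entry at position i is the image of variable index i.
perm = [1, 2, 0]                      # the example generator "1 2 0"
assert perm[0] == 1 and perm[1] == 2 and perm[2] == 0   # 0 -> 1, 1 -> 2, 2 -> 0

def compose(p, q):
    """Return the permutation i -> p[q[i]]."""
    return [p[q[i]] for i in range(len(p))]

# Applying the generator three times gives the identity, so it has order 3.
p2 = compose(perm, perm)              # maps 0 -> 2, 1 -> 0, 2 -> 1
assert compose(perm, p2) == [0, 1, 2]
```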
Do not forget to recompile SCIP.
If you solve an instance with SCIP and symmetry handling is enabled, SCIP will print the generators in the above format whenever it computes the symmetry group. If you are interested in the symmetry group of the original problem, you should use the parameter setting presolving/symbreak/addconsstiming = 0 and propagating/orbitalfixing/symcomptiming = 0. If you are fine with symmetries of the presolved problem, change the zeros to ones.
I have been trying to write a code in C++ for embedded Runge-Kutta methods (explicit and Rosenbrock for the moment). The idea is to keep the code simple and general so that one can pass their Butcher tableau (of any order) and just run it.
I have verified that the code works in general, but there are cases (when I have a very complicated system of 4 differential equations) where the stepsize control fails to adapt (I get constant stepsize or just wrong in general).
The stepsize control I use (I found it in this paper) is:
// beta is some safety parameter
fac = beta;
if (Delta <= 1) { // current step is accepted
    if (delta_rej <= 1) { fac *= h/h_old; } // previous step was rejected
    fac *= std::pow(Delta, -0.65/((LD)method.p + 1.));
    fac *= std::pow(delta_acc/Delta, 0.3/((LD)method.p + 1.));
    h_stop = true; // used to exit the loop inside which the stepsize control is called
} else { // current step is rejected
    fac *= std::pow(Delta, -1./((LD)method.p + 1.));
}
// don't allow h to increase or decrease too much
if (fac > fac_max) { fac = fac_max; }
if (fac < fac_min) { fac = fac_min; }
// update h
h = h*fac;
Here, h_old is the previously accepted stepsize and h is the stepsize of the current trial step.
Also, Delta [1] is the relative (local) error estimate for the current try (which the controller tries to keep ~1), delta_rej is the Delta of the previous try, delta_acc is the Delta of the previous accepted step, and method.p is the order of the method (LD is a macro that expands to double or long double).
I have tried using the simple version of this (i.e. just fac *= std::pow(Delta, -1./((LD)method.p + 1.));), but the version above seems a bit more stable.
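For reference, here is the controller logic transcribed into a small Python function, purely as an illustration: the names mirror the question's C++ variables, and beta, fac_min, fac_max are assumed safety/clamping constants.

```python
# Python transcription of the step-size controller above (illustration only;
# the accept/reject logic mirrors the question's C++ snippet).
def step_factor(Delta, delta_acc, delta_rej, h, h_old, p,
                beta=0.9, fac_min=0.2, fac_max=5.0):
    fac = beta
    accepted = Delta <= 1.0
    if accepted:                       # current step is accepted
        if delta_rej <= 1.0:           # mirrors the question's h/h_old scaling
            fac *= h / h_old
        fac *= Delta ** (-0.65 / (p + 1.0))
        fac *= (delta_acc / Delta) ** (0.3 / (p + 1.0))
    else:                              # current step is rejected
        fac *= Delta ** (-1.0 / (p + 1.0))
    fac = min(max(fac, fac_min), fac_max)   # clamp the change in h
    return h * fac, accepted
```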
For example, these are histograms I got that show the number of steps taken by my code vs scipy:
Explicit RKF,
Rosenbrock. As you can see, they are close, and the difference can be caused by differences in the details of the implementation.
Having said that, I am still not sure, and what I really would like to know is whether I am using the controller correctly.
Thanks
[1]: This is the definition of Delta
In R the cumulative distribution function for the binomial distribution is evaluated via an underlying C function called C_pbinom. I am trying to find the underlying code, so that I can find out what algorithm this function uses to compute the cumulative distribution function. Unfortunately, I have not been successful in finding the underlying code, nor any information on the algorithm that is used.
My question: How do I find the underlying C/C++ code for the function C_pbinom? Alternatively, is there any information source available showing the algorithm used by this function?
What I have done so far: Calling pbinom in R gives the following details:
function (q, size, prob, lower.tail = TRUE, log.p = FALSE)
.Call(C_pbinom, q, size, prob, lower.tail, log.p)
<bytecode: 0x000000000948c5a0>
<environment: namespace:stats>
I have located and opened the underlying NAMESPACE file in the stats library. This file lists various functions, including pbinom, but gives neither the code for C_pbinom nor any pointer to where it can be found. I have also read a related answer on finding source code in R, and an article here on "compiling source code", but neither has been of sufficient assistance to let me find the code. At this point I have hit a dead end.
I went to the Github mirror for the R source code, searched for pbinom, and filtered to C: that got me here. The meat of the function is simply
pbeta(p, x + 1, n - x, !lower_tail, log_p)
This invokes the incomplete beta function (= CDF of the Beta distribution): it means you in turn need to look up the pbeta function in the code: here, it says that the code is "a wrapper for TOMS708", which is in src/nmath/toms708.c and described in a little more detail here (google "TOMS 708") ... original code here.
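The relationship can be checked numerically; here is a quick sketch using SciPy (not R's internals) confirming that the binomial CDF equals the upper tail of the regularized incomplete beta function:

```python
from scipy.stats import binom
from scipy.special import betainc  # betainc(a, b, x) is the regularized I_x(a, b)

# Checking pbinom(x, n, p) = pbeta(p, x + 1, n - x, lower.tail = FALSE)
n, p = 10, 0.3
for x in range(n):
    lhs = binom.cdf(x, n, p)
    rhs = 1.0 - betainc(x + 1, n - x, p)   # upper tail of the Beta CDF
    assert abs(lhs - rhs) < 1e-12
```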
The full reference is here: Didonato and Morris, Jr., "Algorithm 708: Significant Digit Computation of the Incomplete Beta Function Ratios", ACM Transactions on Mathematical Software (TOMS), Volume 18, Issue 3, Sept. 1992, Pages 360-373.
I am new to Python, coming from MATLAB, and long ago from C. I have written a script in MATLAB which simulates sediment transport in rivers as a Markov process. The code randomly places circles of random diameter within a rectangular area of specified dimensions. The circles are non-uniform in size, drawn randomly from a specified range of sizes. I do not know in advance how many circle placement operations will be needed, so I use a while loop to complete the process. In an attempt to be more community oriented, I am translating the MATLAB script to Python. I used the online tool OMPC to get started, and have been working through it manually from the auto-translated version (it was not that helpful, which is not surprising). To debug the code as I go, I compare and contrast the MATLAB-generated results against the results in Python. It seems clear to me that I have declared variables in a way that introduces problems as calculations proceed in the script. Here are two examples of consistent problems between different instances of code execution. First, the code generates what I think are arrays within arrays, because the script returns results which look like:
array([[ True]
[False]], dtype=bool)
This result was generated for the following code snippet at the overlap_logix operation:
CenterCoord_Array = np.asarray(CenterCoordinates)
Diameter_Array = np.asarray(Diameter)
dist_check = ((CenterCoord_Array[:,0] - x_Center) ** 2 + (CenterCoord_Array[:,1] - y_Center) ** 2) ** 0.5
radius_check = (Diameter_Array / 2) + radius
radius_check_update = np.reshape(radius_check,(len(radius_check),1))
radius_overlap = (radius_check_update >= dist_check)
# Now actually check the overlap condition.
if np.sum([radius_overlap]) == 0:
# The new circle does not overlap so proceed.
newCircle_Found = 1
debug_value = 2
elif np.sum([radius_overlap]) == 1:
# The new circle overlaps with one other circle
overlap = np.arange(0,len(radius_overlap), dtype=int)
overlap_update = np.reshape(overlap,(len(overlap),1))
overlap_logix = (radius_overlap == 1)
idx_true = overlap_update[overlap_logix]
radius = dist_check(idx_true,1) - (Diameter(idx_true,1) / 2)
A similar result for the same run was produced for variables:
radius_check_update
radius_overlap
overlap_update
Here is the same code snippet for the working MATLAB version (as requested):
distcheck = ((Circles.CenterCoordinates(1,:)-x_Center).^2 + (Circles.CenterCoordinates(2,:)-y_Center).^2).^0.5;
radius_check = (Circles.Diameter ./ 2) + radius;
radius_overlap = (radius_check >= distcheck);
% Now actually check the overlap condition.
if sum(radius_overlap) == 0
% The new circle does not overlap so proceed.
newCircle_Found = 1;
debug_value = 2;
elseif sum(radius_overlap) == 1
% The new circle overlaps with one other circle
temp = 1:size(radius_overlap,2);
idx_true = temp(radius_overlap == 1);
radius = distcheck(1,idx_true) - (Circles.Diameter(1,idx_true)/2);
In the Python version I have created arrays from lists to more easily operate on the contents (the first two lines of the code snippet). The array-within-array result, and having to create arrays just to access data, suggest to me that I have incorrectly declared variable types, but I am not sure. Furthermore, some variables have a size of, for example, (2L,) (the numerical dimension will change as circles are placed), with no second dimension. This produces obvious problems when I try to use such an array in an operation with another array of size (2L,1L). Because of these problems I started reshaping arrays, and then stopped because I decided these reshapes were hacks covering up one or more incorrectly declared variables.
Second, for the same run I encountered the following error:
TypeError: 'numpy.ndarray' object is not callable
for the operation:
radius = dist_check(idx_true,1) - (Diameter(idx_true,1) / 2)
which occurs at the bottom of the above code snippet. I have posted the entire script at the following link because it is probably more useful to execute the script for oneself:
https://github.com/smchartrand/MarkovProcess_Bedload
I have set-up the code to run with some initial parameter values so decisions do not need to be made; these parameter values produce the expected results in the MATLAB-based script, which look something like this when plotted:
So, I seem to specifically be having issues with the operations on lines 151-165, depending on the test value np.sum([radius_overlap]), and I think it is because I incorrectly declared variable types, but I am really not sure. I can say with confidence that the Python and MATLAB versions are consistent in output through the first step of the while loop, up to code line 127, which enters the second step of the while loop. Below this point the issues documented above eventually cause the script to crash. Sometimes the script executes to 15% complete, and sometimes it does not make it to 5% - this is due to the random nature of circle placement. I am preparing the code in the Spyder (Python 2.7) IDE and will share the working code publicly as part of my research. I would greatly appreciate any help identifying my mistakes and misapplications of Python coding practice.
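For what it's worth, the TypeError above can be reproduced in isolation with a minimal NumPy example (independent of the full script): parentheses on an ndarray are a function call, while indexing needs square brackets.

```python
import numpy as np

dist_check = np.array([0.5, 1.5, 2.5])
try:
    dist_check(1)              # MATLAB-style parentheses: Python treats this as a call
except TypeError as err:
    assert "not callable" in str(err)

assert dist_check[1] == 1.5    # NumPy indexing uses square brackets
```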
I believe I have answered my own question, and maybe it will be of use for someone down the road. The main sources of instruction for me can be found at the following three web pages:
Stackoverflow Question 176011
SciPy FAQ
SciPy NumPy for Matlab users
The third web page was very helpful for me coming from MATLAB. Here is the modified and working python code snippet which relates to the original snippet provided above:
dist_check = ((CenterCoordinates[0,:] - x_Center) ** 2 + (CenterCoordinates[1,:] - y_Center) ** 2) ** 0.5
radius_check = (Diameter / 2) + radius
radius_overlap = (radius_check >= dist_check)
# Now actually check the overlap condition.
if np.sum([radius_overlap]) == 0:
# The new circle does not overlap so proceed.
newCircle_Found = 1
debug_value = 2
elif np.sum([radius_overlap]) == 1:
# The new circle overlaps with one other circle
overlap = np.arange(0,len(radius_overlap[0]), dtype=int).reshape(1, len(radius_overlap[0]))
overlap_logix = (radius_overlap == 1)
idx_true = overlap[overlap_logix]
radius = dist_check[idx_true] - (Diameter[0,idx_true] / 2)
In the end it was clear to me that it was more straightforward for this example to use numpy arrays vs. lists to store results for each iteration of filling the rectangular area. For the corrected code snippet this means I initialized the variables:
CenterCoordinates, and
Diameter
as numpy arrays whereas I initialized them as lists in the posted question. This made a few mathematical operations more straightforward. I was also incorrectly indexing into variables with parentheses () as opposed to the correct method using brackets []. Here is an example of a correction I made which helped the code execute as envisioned:
Incorrect: radius = dist_check(idx_true,1) - (Diameter(idx_true,1) / 2)
Correct: radius = dist_check[idx_true] - (Diameter[0,idx_true] / 2)
This example also shows that I had issues with array dimensions which I corrected variable by variable. I am still not sure if my working code is the most pythonic or most efficient way to fill a rectangular area in a random fashion, but I have tested it about 100 times with success. The revised and working code can be downloaded here:
Working Python Script to Randomly Fill Rectangular Area with Circles
Here is an image of the final result for a successful run of the working code:
The main lessons for me were (1) numpy arrays are more efficient for repetitive numerical calculations, and (2) the dimensionality of the arrays I created was not always what I expected it to be, so care must be taken when establishing arrays. Thanks to those who looked at my question and asked for clarification.
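Lesson (2) can be illustrated with a minimal example (assumed shapes, not taken from the script): comparing a 1-D array of shape (n,) against a column of shape (n, 1) broadcasts to an (n, n) matrix instead of producing the element-wise result.

```python
import numpy as np

dist_check = np.array([1.0, 2.0, 3.0])        # shape (3,)
radius_col = np.array([[1.5], [2.5], [0.5]])  # shape (3, 1)

# Broadcasting (3, 1) against (3,) yields a (3, 3) comparison matrix,
# which is rarely what a MATLAB-style element-wise comparison intends.
assert (radius_col >= dist_check).shape == (3, 3)

# Flattening the column back to (3,) restores the element-wise comparison.
assert (radius_col.ravel() >= dist_check).shape == (3,)
```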
I have a simple FMU file which contains a sine block that takes u as input and outputs y. In this case, u is set equal to time. In my C++ code I have loaded the FMI library from FMILibrary and done all the necessary steps up to the point where I want to give my input u a new value of pi (as 3.14). So I went:
fmistatus = fmi2_import_set_real(fmu, &uRef, 1, &pi);
while (timeCurrent < timeEnd) {
    fmistatus = fmi2_import_do_step(fmu, timeCurrent, stepSize, fmi2_true);
    timeCurrent += stepSize;
}
u was still set to time even though I tried to give it a new value. Did I miss something?
PS. Is there anywhere I can find a more detailed description of the FMI library functions? Currently I can only find input/output descriptions - or did I miss something again?
UPDATE: After a few trials, I think this issue might be because I was trying to redefine my equation u = time. In other words, when I change my u variable into a RealInput block in OpenModelica, everything goes fine. So what if I really want to redefine a certain equation? What do I have to do?
You are not able to set arbitrary variables in FMI - and especially not a variable with a binding equation - and I assume your Modelica model has "u = time;". Instead of having "u = time", you need to add a top-level input without any equation (so that the exported FMU has it as an input) and then connect that input to the sine block.
Details:
For a co-simulation FMU, the restrictions on what you can set are given in the state diagram in section 4.2.4 of the FMI 2 specification.
Between fmi2DoStep calls you can only set Real variables that have causality="input", or causality="parameter" and variability="tunable" - and an input with a binding equation does not qualify.
Before starting the integration you can set other variables as well, but those values are only guesses for the initialization - they should not overwrite the "u = time" equation.
I'm trying to use a sum of TF1 objects to fit a piecewise function in CERN's ROOT. Here is the important part of the code.
TCanvas *cvi = new TCanvas("V(i)","V(i)",200,10,600,400);
TGraphErrors *gvi = new TGraphErrors(27,i,V,si,sV);
TF1 *vi1 = new TF1("vi1","[0]*log(x*[1]+1)");
gvi->Fit(vi1,"MR+");
TF1 *vi2 = new TF1("vi2","[0]*x+[1]",0.005,0.012);
gvi->Fit(vi2,"MR+");
TF1 *vitot = new TF1("vi1+vi2","vi1+vi2");
gvi->Fit(vitot,"MR+");
The fits with vi1 and vi2 are fine but, as you see, vi2 has a range, so the fit with vitot should use the piecewise function that equals vi1(x) + vi2(x) for 0.005 < x < 0.012 and vi1(x) elsewhere.
Nevertheless, the program does not respect the range I gave for vi2 when it performs the vitot fit. That is, it fits vitot as if I had given no range at all.
How can I force ROOT to take the range I gave for vi2 into account when fitting vitot?
I cannot set a range directly on vitot because it would fit only that part of data, while I'm trying to fit all the data with different functions.
I already passed the option "R" in the fits, as you see, but that does not seem to be enough.
Looks like you want ROOT to assume some "default" value for the vi2 function outside of the initially provided range 0.005 < x < 0.012. In your case you want that "default" value to be 0 but, in general, I don't think ROOT should assume anything. I am not sure whether the ROOT developers used the same logic, but it is unlikely that the sum of functions is implemented the way you expect. Nevertheless, it is straightforward to write a custom C function for vitot that respects any range you want. Just take a look at the section "A general C function with parameters" of the TF1 documentation at https://root.cern.ch/doc/master/classTF1.html
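As a sketch of that idea - written here with NumPy/SciPy rather than ROOT's TF1 API, purely for illustration - the custom model applies the linear piece only inside [0.005, 0.012] and reduces to the logarithmic piece elsewhere:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the custom piecewise model (NumPy/SciPy stand-ins, not ROOT):
# vi1 = a*log(b*x + 1) applies everywhere; vi2 = c*x + d only in its range.
def vitot(x, a, b, c, d):
    base = a * np.log(b * x + 1.0)
    piece = np.where((x >= 0.005) & (x <= 0.012), c * x + d, 0.0)
    return base + piece

# Fit noiseless synthetic data generated from known parameters to show
# that the piecewise model is recoverable.
x = np.linspace(0.001, 0.02, 200)
true = (2.0, 100.0, 50.0, 0.1)
y = vitot(x, *true)
popt, _ = curve_fit(vitot, x, y, p0=(1.0, 50.0, 10.0, 0.0))
```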