Find one solution of an equation via sympy solve()

Is there a way to find just one solution of a complicated equation via SymPy? It seems that finding all the solutions is too expensive.
I have tried passing quick=True, but it does not help.

If the equation is a polynomial you can request a particular RootOf solution; if it is a univariate expression you can get a numerical solution near some initial guess with nsolve.
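For illustration, a minimal sketch of both approaches (the example polynomial, the equation cos(x) = x, and the initial guess are made up here, not taken from the original question):

from sympy import symbols, nsolve, cos, RootOf

x = symbols('x')

# Polynomial case: ask for a single indexed root instead of solving for all of them
r0 = RootOf(x**5 - x + 1, 0)    # the first root of this quintic
print(r0.evalf())

# Non-polynomial univariate case: numerical root near an initial guess
sol = nsolve(cos(x) - x, x, 1)  # start the search at x = 1
print(sol)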


Linear programming feasibility: non-connected solution set

I would like to solve a feasibility problem subject to linear constraints. My constraints look like:
abs(x_i - x_j) < d_ij_1
abs(x_i - x_j - a) < d_ij_2
abs(x_i - x_j) > d_ij_3
etc...
I am adding a picture of an example with just 3 variables (I fix the first variable to 0). The white regions are valid solutions, and I could for instance choose the red dot.
My issue is that as I increase the number of unknowns x_j, I can no longer represent the problem in a way that makes it easy to find a solution. How can I try to solve such a problem? Would linear programming help, even though the solution space is not really connected here? For scale, I am looking at solving it for ~6-10 variables. Also, I posted here because I don't know which Stack Exchange site would be the best fit for this kind of problem.
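One common way to model this (a sketch, not from the original question): each convex constraint abs(x_i - x_j) < d splits into two linear inequalities, while each non-convex abs(x_i - x_j) > d constraint can be encoded with a binary variable and a big-M disjunction, which turns the feasibility check into a small mixed-integer program. A minimal sketch with PuLP, using made-up bounds, distances and a hypothetical big-M value:

import pulp

# Toy instance: 3 unknowns, the first one fixed to 0 as in the question
x = [pulp.LpVariable(f"x_{i}", lowBound=0, upBound=10) for i in range(3)]
prob = pulp.LpProblem("feasibility", pulp.LpMinimize)
prob += pulp.lpSum(x)  # dummy objective; we only care about feasibility

# abs(x_0 - x_1) < d1 becomes two linear constraints (solvers use <=, not <)
d1 = 3.0
prob += x[0] - x[1] <= d1
prob += x[1] - x[0] <= d1

# abs(x_1 - x_2) > d3 is non-convex: pick one side of the disjunction with a binary
d3 = 1.0
M = 100.0  # must exceed the largest possible |x_1 - x_2|
b = pulp.LpVariable("b_12", cat="Binary")
prob += x[1] - x[2] >= d3 - M * (1 - b)  # enforced when b = 1
prob += x[2] - x[1] >= d3 - M * b        # enforced when b = 0

prob += x[0] == 0  # fix the first variable, as in the picture
prob.solve()
print(pulp.LpStatus[prob.status], [v.value() for v in x])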

Is MULES the right choice for a modified transport equation? [OpenFOAM]

I want to solve the following modified transport equation for the calculation of sensitivities:
Within interFoam, I implemented the following code right after solving alphaEqn.H (along with initial fields dAlpha and dU, calculation of dPhi and other modifications):
{
    // Discretisation scheme names looked up from fvSchemes
    word dAlphaScheme("div(phi,dAlpha)");
    word dPhiScheme("div(dPhi,alpha)");

    // Provides the Su/Sp source terms used below (as in interFoam's alphaEqn.H)
    #include "alphaSuSp.H"

    // Convective flux of dAlpha by phi plus flux of alpha1 by the sensitivity flux dPhi
    surfaceScalarField dAlphaPhi
    (
        fvc::flux(phi, dAlpha, dAlphaScheme)
      + fvc::flux(dPhi, alpha1, dPhiScheme)
    );

    // Explicit MULES (flux-corrected transport) update of dAlpha
    MULES::explicitSolve
    (
        geometricOneField(),
        dAlpha,
        dAlphaPhi,
        Sp,
        Su
    );
}
It works, but I'm not sure if MULES::explicitSolve is the right way to solve this equation. After a few time steps the solution smears heavily. What would you recommend?
cfd-online is probably your best bet as I mentioned in the comments above. It looks like you're trying to implement an additional scalar transport equation, although it's not clear what physics you're calculating when you say 'sensitivities'.
Take a look at the example here on adding additional scalar transport equations to OpenFOAM solvers: https://openfoamwiki.net/index.php/How_to_add_temperature_to_icoFoam
It is out of date as it is for OpenFOAM v1.7, but the concept is 99.999% the same:
Create the field
Add appropriate transport coefficients to dictionaries
Add appropriate schemes to fvSchemes
Define the transport equation
Use the .solve() function on the transport equation

What are hp.Discrete and hp.RealInterval? Can I include more values in hp.RealInterval instead of just 2?

I am using hyperparameter tuning with the HParams Dashboard in TensorFlow 2.0-beta0, as suggested here: https://www.tensorflow.org/tensorboard/r2/hyperparameter_tuning_with_hparams
I am confused by step 1 and could not find a better explanation. My questions are about the following lines:
HP_NUM_UNITS = hp.HParam('num_units', hp.Discrete([16, 32]))
HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
HP_OPTIMIZER = hp.HParam('optimizer', hp.Discrete(['adam', 'sgd']))
My question:
I want to try more dropout values instead of just two (0.1 and 0.2). If I list more values there, it throws an error: 'maximum 2 arguments can be given'. I tried to look for documentation but could not find where these hp.Discrete and hp.RealInterval functions come from.
Any help would be appreciated. Thank you!
Good question. The notebook tutorial is lacking in many respects. At any rate, here is how you can do it at a given resolution res:
for dropout_rate in tf.linspace(
        HP_DROPOUT.domain.min_value,
        HP_DROPOUT.domain.max_value,
        res,
):
    # ... build and train a model with this dropout_rate ...
By looking at the implementation, to me it really doesn't seem to be grid search but Monte Carlo / random search (note: this is not 100% correct, please see my edit below).
So on every iteration a random float from that real interval is chosen.
If you want grid-search behavior, just use Discrete. That way you can even mix and match grid search with random search, which is pretty cool!
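For example, to try more dropout values directly (the particular values below are just an illustration), list them in a Discrete domain:

HP_DROPOUT = hp.HParam('dropout', hp.Discrete([0.1, 0.15, 0.2, 0.25, 0.3]))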
Edit, 27th of July '22 (based on the comment of @dpoiesz):
Just to make it a little clearer: since values are sampled from the intervals, concrete values are returned. Those are then added to the grid dimension, and grid search is performed using them.
RealInterval is a (min, max) pair from which the hparam will pick a number.
Here is a link to the implementation for better understanding.
The thing is that, as currently implemented, there does not seem to be any difference between the two unless you call the sample_uniform method.
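For instance (a small sketch, assuming the usual import of the hparams API as hp):

HP_DROPOUT = hp.HParam('dropout', hp.RealInterval(0.1, 0.2))
rate = HP_DROPOUT.domain.sample_uniform()  # random float drawn from [0.1, 0.2]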
Note that tf.linspace breaks the sample code mentioned above when saving the current value.
See https://github.com/tensorflow/tensorboard/issues/2348, in particular OscarVanL's comment about his quick-and-dirty workaround.

What exactly is pairwise matching and how does it work?

I'm working on multiple image stitching and I came across the term pairwise matching. I have searched almost every site but am unable to get a clear description of what exactly it is and how it works.
I'm working in Visual Studio 2012 with OpenCV. I have modified stitching_detailed.cpp according to my requirements and have been successful in maintaining quality while taking significantly less time, except for pairwise matching. I'm using ORB to find feature points. BestOf2NearestMatcher is used in stitching_detailed.cpp for pairwise matching.
What I know about Pairwise Matching and BestOf2NearestMatcher:
(Correct me if I'm wrong somewhere)
1) Pairwise matching works similarly to other matchers such as the brute-force matcher, the FLANN-based matcher, etc.
2) Unlike those matchers, pairwise matching works with multiple images; with the others you have to go pair by pair if you want to use them for multiple images.
3) In Pairwise Matching, the features of one image are matched with every other image in the data set.
4) BestOf2NearestMatcher finds two best matches for each feature and leaves the best one only if the ratio between descriptor distances is greater than the threshold match_conf.
What I want to know:
1) I want to know more details about pairwise matching, if I'm missing some on it.
2) I want to know HOW pairwise matching works, the actual flow of it in detail.
3) I want to know HOW BestOf2NearestMatcher works, the actual flow of it in detail.
4) Where can I find code for BestOf2NearestMatcher? OR Where can I get similar code to BestOf2NearestMatcher?
5) Is there any alternative I can use for pairwise matching (or BestOf2NearestMatcher) which takes less time than the current one?
Why I want to know and what I'd do with it:
1) As I stated in the introduction part, I want to reduce the time pairwise matching takes. If I'm able to understand what actually pairwise matching is and how it works, I can create my own according to my requirement or I can modify the existing one.
Here's where I posted a question about reducing the time for the entire program: here. I'm not asking the same question again; I'm asking about specifics here. There I wanted to know how I can reduce time in pairwise matching as well as other code sections, and here I want to know what pairwise matching is and how it works.
Any help is much appreciated!
EDIT: I found the code for pairwise matching in matchers.cpp. I created my own function in the main code to optimize the time. It works well.
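For reference, points 3) and 4) above boil down to a loop over image pairs with a k=2 nearest-neighbour match and a ratio test. A rough Python/OpenCV sketch of that idea (not the stitching-pipeline code itself; the file names and match_conf value are made up):

import itertools
import cv2

# Hypothetical input images
imgs = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in ("a.jpg", "b.jpg", "c.jpg")]
orb = cv2.ORB_create()
feats = [orb.detectAndCompute(im, None) for im in imgs]  # (keypoints, descriptors) per image

bf = cv2.BFMatcher(cv2.NORM_HAMMING)
match_conf = 0.3  # plays the same role as match_conf in BestOf2NearestMatcher

for i, j in itertools.combinations(range(len(imgs)), 2):
    knn = bf.knnMatch(feats[i][1], feats[j][1], k=2)  # two best matches per feature
    good = [
        pair[0] for pair in knn
        if len(pair) == 2 and pair[0].distance < (1 - match_conf) * pair[1].distance
    ]
    print(f"images {i}-{j}: kept {len(good)} matches")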

Weka improve model TP Rate

[image: J48 cross-validation output in Weka]
Hi,
I have a problem with my model in Weka (J48, cross-validation): many instances are classified incorrectly when it comes to the second class. Is there any way to improve this, or not really? I'm not an expert in Weka. Thank you in advance. My output is above.
With NaiveBayes it looks better, but the TP rate is still < 0.5 for the second class.
[image: NaiveBayes output in Weka]
It is hard to reproduce your example with the given information. However, the solution is probably to turn your classifier into a cost-sensitive classifier:
https://weka.wikispaces.com/CostSensitiveClassifier
What it does is assign a higher cost to misclassifications of a certain class. In your case this would be the "True" class.
You can also simulate such an algorithm by oversampling your positive examples. That is, if you have n positive examples you sample k*n positive examples, while keeping your negative examples as they are. You could also simply duplicate the positive examples; a sketch of this is below.
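A minimal sketch of that oversampling idea in Python (the data layout, label names and factor k are assumptions for illustration; within Weka itself the supervised Resample filter plays a similar role):

import random

def oversample_positives(rows, k=3, positive_label="True"):
    """Duplicate each positive example k times; keep negatives as they are.

    rows: list of (features, label) pairs.
    """
    pos = [r for r in rows if r[1] == positive_label]
    neg = [r for r in rows if r[1] != positive_label]
    boosted = neg + pos * k
    random.shuffle(boosted)
    return boosted

# Tiny usage example with made-up data
data = [([1.0, 2.0], "True"), ([0.5, 1.1], "False"), ([0.2, 0.3], "False")]
print(oversample_positives(data, k=3))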