I want to solve the following modified transport equation for the calculation of sensitivities:
Within interFoam, I implemented the following code right after solving alphaEqn.H (along with initial fields dAlpha and dU, calculation of dPhi and other modifications):
{
    word dAlphaScheme("div(phi,dAlpha)");
    word dPhiScheme("div(dPhi,alpha)");

    #include "alphaSuSp.H"

    surfaceScalarField dAlphaPhi
    (
        fvc::flux(phi, dAlpha, dAlphaScheme)
      + fvc::flux(dPhi, alpha1, dPhiScheme)
    );

    MULES::explicitSolve
    (
        geometricOneField(),
        dAlpha,
        dAlphaPhi,
        Sp,
        Su
    );
}
It works, but I'm not sure if MULES::explicitSolve is the right way to solve this equation. After a few time steps the solution smears heavily. What would you recommend?
cfd-online is probably your best bet as I mentioned in the comments above. It looks like you're trying to implement an additional scalar transport equation, although it's not clear what physics you're calculating when you say 'sensitivities'.
Take a look at the example here on adding additional scalar transport equations to OpenFOAM solvers: https://openfoamwiki.net/index.php/How_to_add_temperature_to_icoFoam
It is out of date, as it is written for OpenFOAM v1.7, but the concept is 99.999% the same (a rough sketch of the last two steps follows the list):
1. Create the field
2. Add appropriate transport coefficients to dictionaries
3. Add appropriate schemes to fvSchemes
4. Define the transport equation
5. Use the .solve() function on the transport equation
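For orientation, here is a minimal sketch of what steps 4 and 5 could look like for your dAlpha field, following the pattern in that tutorial. The diffusivity DdAlpha is my assumption (it would be read from transportProperties); drop the laplacian term if your sensitivity equation is purely advective:

    // Sketch only: plain implicit advection-diffusion for dAlpha.
    fvScalarMatrix dAlphaEqn
    (
        fvm::ddt(dAlpha)
      + fvm::div(phi, dAlpha)
      - fvm::laplacian(DdAlpha, dAlpha)   // omit if there is no diffusion term
    );
    dAlphaEqn.solve();

Note that a plain implicit solve like this avoids MULES entirely, but it also does not give you the boundedness that MULES enforces.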
Edit: TL;DR version: how do I get all possible backtraces for the Damerau–Levenshtein distance between two words? I'm using https://en.wikipedia.org/wiki/Wagner%E2%80%93Fischer_algorithm to compute the distance, and a trivial backtrace algorithm (illustrated below) to reconstruct the list of corrections.
More details below:
I just got stuck with optimal string alignment (a variant of the Damerau–Levenshtein distance) while trying to get the complete set of possible alignments.
The goal is to align two strings for further comparison in an auto-suggestion algorithm. In particular, I'd like to ignore insertions past the end of the first word.
The problem is that in some cases multiple "optimal" alignments are possible, e.g.
align("goto", "go to home")
1) go to
   go to home

2) go t o
   go to home
Unfortunately, my implementation finds only the second variant, while I need both (or at least the first one).
I've tried to perform some kind of A* or BFS path finding, but it looks like the cost matrix is "tuned" for variant (2) only. There is a screenshot below where I can find the red path, but it looks like there is no green path:
However, someone made a web demo which implements exactly what I want:
What am I missing here?
My implementation is probably too long to post here, so here is a link to GitHub: https://github.com/victor-istomin/incrementalSpellCheck/blob/f_improvement/spellCheck.hpp
The distance implementation is in the optimalStringAlignementDistance() and optimalStringAlignmentBacktrace() methods.
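For reference, getting all co-optimal backtraces amounts to following every predecessor cell that is consistent with the recurrence, not just the first one found. A minimal sketch (not taken from spellCheck.hpp; function and variable names are mine, and it assumes the OSA cost matrix d of size (a.size()+1) x (b.size()+1) has already been filled in):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Ops: 'M' match, 'S' substitute, 'D' delete from a, 'I' insert from b, 'T' transpose.
    // Call as: backtraceAll(d, a, b, a.size(), b.size(), "", results);
    void backtraceAll(const std::vector<std::vector<int>>& d,
                      const std::string& a, const std::string& b,
                      std::size_t i, std::size_t j,
                      std::string path, std::vector<std::string>& out)
    {
        if (i == 0 && j == 0)
        {
            out.push_back(std::string(path.rbegin(), path.rend())); // ops were collected backwards
            return;
        }
        if (i > 0 && j > 0)                                          // diagonal: match or substitution
        {
            int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
            if (d[i][j] == d[i - 1][j - 1] + cost)
                backtraceAll(d, a, b, i - 1, j - 1, path + (cost ? 'S' : 'M'), out);
        }
        if (i > 0 && d[i][j] == d[i - 1][j] + 1)                     // up: deletion from a
            backtraceAll(d, a, b, i - 1, j, path + 'D', out);
        if (j > 0 && d[i][j] == d[i][j - 1] + 1)                     // left: insertion from b
            backtraceAll(d, a, b, i, j - 1, path + 'I', out);
        if (i > 1 && j > 1 && a[i - 1] == b[j - 2] && a[i - 2] == b[j - 1]
            && d[i][j] == d[i - 2][j - 2] + 1)                       // transposition (OSA)
            backtraceAll(d, a, b, i - 2, j - 2, path + 'T', out);
    }

With both variants available you can then pick whichever suits your auto-suggestion logic, e.g. the one whose insertions all fall past the end of the first word.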
Is there a way to incorporate the uncertainties on my data set into the result of a Savitzky-Golay fit? Since I am not passing this information into the function, I assume that it is simply calculating the 'best fit' via an unweighted least-squares process. I am currently working with data that has non-uniform uncertainty, so the fit could be improved by including the errors that I have for my main dataset.
The Wikipedia page for the Savitzky-Golay filter suggests how I might go about altering the calculation of the fit coefficients, and I have been staring at the code for scipy.signal.savgol_filter, but I cannot get my head around what I need to adjust so that it will do what I want.
Are there any ready-made weighted SG filters floating about? I find it hard to believe that no-one else has ever needed this tool in Python, but maybe I have missed something.
Check out this Python module: https://github.com/surhudm/savitzky_golay_with_errors
This Python script improves upon the traditional Savitzky-Golay filter by accounting for errors or covariance in the data. The inputs and arguments are all modelled after scipy.signal.savgol_filter.
The MATLAB function sgolayfilt supports weights; check the documentation.
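Whatever tool you use, the core of an error-weighted Savitzky-Golay filter is a weighted least-squares polynomial fit in each window, with inverse-variance weights. Here is a minimal sketch of just that idea, reduced to degree 1 for brevity (a real SG filter fits a higher-degree polynomial); it is an illustration, not a drop-in replacement for savgol_filter, and the function name is mine:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // In each window, fit a straight line by weighted least squares with
    // weights w = 1/sigma^2 and take its value at the window centre.
    std::vector<double> weightedLocalLinearSmooth(const std::vector<double>& y,
                                                  const std::vector<double>& sigma,
                                                  std::size_t halfWindow)
    {
        std::vector<double> out(y.size());
        for (std::size_t i = 0; i < y.size(); ++i)
        {
            std::size_t lo = (i >= halfWindow) ? i - halfWindow : 0;
            std::size_t hi = std::min(y.size() - 1, i + halfWindow);
            double S = 0, St = 0, Stt = 0, Sy = 0, Sty = 0;
            for (std::size_t j = lo; j <= hi; ++j)
            {
                double w = 1.0 / (sigma[j] * sigma[j]);                      // inverse-variance weight
                double t = static_cast<double>(j) - static_cast<double>(i);  // abscissa centred on i
                S   += w;
                St  += w * t;
                Stt += w * t * t;
                Sy  += w * y[j];
                Sty += w * t * y[j];
            }
            double denom = S * Stt - St * St;
            double slope = (denom != 0.0) ? (S * Sty - St * Sy) / denom : 0.0;
            out[i] = (Sy - slope * St) / S;   // fitted value at t = 0 (the centre point)
        }
        return out;
    }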
[screenshot: J48 Weka output]
Hi,
I have a problem with my model in Weka (J48, cross-validation): many instances are classified wrongly when it comes to the second class. Is there any way to improve this, or not really? I'm not an expert in Weka. Thank you in advance. My output is above.
With NaiveBayes it looks better, but the TP rate is still < 0.5 for the second class.
[screenshot: NaiveBayes Weka output]
It is hard to reproduce your example with the given information. However, the solution is probably to turn your classifier into a cost-sensitive classifier:
https://weka.wikispaces.com/CostSensitiveClassifier?responseToken=019a566fb2ce3b016b9c8c791c92e8e35
What it does is assign a higher cost to misclassifications of a certain class; in your case this would be the "True" class.
You can also simulate such an algorithm by oversampling your positive examples. That is, if you have n positive examples you sample k*n positive examples, while keeping your negative examples as they are. You could also simply duplicate the positive examples.
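If you want to do the oversampling outside of Weka, it is nothing more than replicating the positive rows before training. A rough sketch, with Example standing in for whatever instance type you export (the function name is made up):

    #include <cstddef>
    #include <vector>

    // Naive oversampling: replicate every positive example k times while
    // keeping the negatives untouched.
    template <typename Example>
    std::vector<Example> oversamplePositives(const std::vector<Example>& data,
                                             const std::vector<bool>& isPositive,
                                             std::size_t k)
    {
        std::vector<Example> out;
        for (std::size_t i = 0; i < data.size(); ++i)
        {
            std::size_t copies = isPositive[i] ? k : 1;
            for (std::size_t c = 0; c < copies; ++c)
                out.push_back(data[i]);
        }
        return out;
    }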
I'm digging up some info about filtering the noise out of my IQ data samples in C++.
I have learned that this can be done using a simple filter which calculates the average of the last few data samples and applies it to the current sample.
Do you have any further experience with this kind of filtering or do you recommend using some existing FIR filtering library?
Thanks for your comments!
Unfortunately, it is not as simple as "just get some library and it will do all the work for you"; digital filtering is quite a complicated subject.
It is easy to apply a digital filter to your data only if your measurements come at fixed time intervals (the "sample rate" in digital-filter terms). Otherwise (if the time intervals vary), it is not trivial to apply digital filters (I suspect you might need an FFT to do it, but I might be wrong here).
Digital filters (both IIR and FIR) are interesting in that as soon as you know the coefficients, you don't really need a library; it is easy to write one yourself (see, for example, the first picture here: https://en.wikipedia.org/wiki/Finite_impulse_response - looks simple, right?). It is finding the coefficients that is tricky.
As a prerequisite to finding the coefficients, you need to understand quite a lot about filters. You need to know what kind of filter you need (if it is applied after demodulation you'll likely want a low-pass filter; otherwise see the comment by MSalters below). You need to understand what a "corner frequency" is, and you need to work out how to map those frequencies onto your samples (for example, you can say that your samples come once per second, or at any other rate, but this choice will affect your desired "corner frequency"). As soon as you have this understanding of what you need in terms of digital filters, finding the coefficients is quite easy: you can do it either in MatLab or with an online calculator; search for "digital filter calculator" in Google.
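To illustrate the "easy to write it yourself" part: once you have the coefficients, a direct-form FIR filter is just a weighted sum of the last few samples. A minimal sketch (function name is mine); for IQ data you would run it over the I and Q streams separately:

    #include <cstddef>
    #include <vector>

    // Direct-form FIR filter: y[n] = sum_k b[k] * x[n - k].
    // The coefficients b come out of your filter design (MatLab, an online
    // "digital filter calculator", ...); a plain moving average is the
    // special case b[k] = 1/N for all k.
    std::vector<double> firFilter(const std::vector<double>& b,
                                  const std::vector<double>& x)
    {
        std::vector<double> y(x.size(), 0.0);
        for (std::size_t n = 0; n < x.size(); ++n)
            for (std::size_t k = 0; k < b.size() && k <= n; ++k)
                y[n] += b[k] * x[n - k];
        return y;
    }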
I am working on a user behavior project. Based on user interaction I have got some data. There is a nice sequence which smoothly increases and decreases over time. But there are little discrepancies, which are very bad. Please refer to the graph below:
You can also find the data here:
2.0789 2.09604 2.11472 2.13414 2.15609 2.17776 2.2021 2.22722 2.25019 2.27304 2.29724 2.31991 2.34285 2.36569 2.38682 2.40634 2.42068 2.43947 2.45099 2.46564 2.48385 2.49747 2.49031 2.51458 2.5149 2.52632 2.54689 2.56077 2.57821 2.57877 2.59104 2.57625 2.55987 2.5694 2.56244 2.56599 2.54696 2.52479 2.50345 2.48306 2.50934 2.4512 2.43586 2.40664 2.38721 2.3816 2.36415 2.33408 2.31225 2.28801 2.26583 2.24054 2.2135 2.19678 2.16366 2.13945 2.11102 2.08389 2.05533 2.02899 2.00373 1.9752 1.94862 1.91982 1.89125 1.86307 1.83539 1.80641 1.77946 1.75333 1.72765 1.70417 1.68106 1.65971 1.64032 1.62386 1.6034 1.5829 1.56022 1.54167 1.53141 1.52329 1.51128 1.52125 1.51127 1.50753 1.51494 1.51777 1.55563 1.56948 1.57866 1.60095 1.61939 1.64399 1.67643 1.70784 1.74259 1.7815 1.81939 1.84942 1.87731
1.89895 1.91676 1.92987
I want to smooth out this sequence. The technique should be able to eliminate numbers with the characteristics of X and Y, i.e. errors in the monotonically increasing or decreasing parts.
If it cannot eliminate them, the technique should at least shift them so that the series is not affected by the errors.
What I have tried, without success:
I tried testing the difference between consecutive values. In some special cases it works, but for a sequence like the one presented here the distances between numbers are not such that I can cut out the errors.
I tried applying a counter with some threshold X: only then is a change accepted, otherwise the point is mapped to the previous point. Here I have great trouble deciding on the value of X, because this is based on user interaction, which I do not really control. If the user interaction is such that its plot would be a zigzag pattern, I end up in a 'no user movement detected at all' situation.
Please share the techniques that you are aware of.
PS: The data made available in this example is a particular case. There is no typical pattern in which the numbers are going to occur, but we expect some range to be continuous in all the examples. The solution I am seeking is generic.
I do not know how much effort you want to put into this problem, but if you want theoretical guarantees, topological persistence seems well adapted to your problem, IMHO.
Basically, with that method you can filter local maxima/minima by fixing a scale, and there are theoretical proofs saying that if your sampling is close to your underlying function, then persistence extracts the correct number of maxima.
You can look at these slides (mainly pages 7-9) to get an idea of the method.
Basically, if you take your points as a landscape and imagine a water level starting from the maximum height and decreasing, you get some peaks.
Every peak has a time when it is born, which is when it emerges, and a time when it dies, which is when it merges with a higher peak. A persistence diagram then plots a point for every peak whose x/y coordinates are its times of birth and death (by assumption the first peak does not die and is not shown).
If a peak is a global maximum, it will be further from the diagonal in the persistence diagram than a local-maximum peak. To remove local maxima you have to remove the peaks close to the diagonal. There are four local maxima in your example, as you can see from the persistence diagram of your data (thanks for providing the data, by the way), and two global ones (the first peak is not pictured in a persistence diagram):
If you add noise to your data like this:
You will still get a very decent persistence diagram that will allow you to filter local maxima as you want:
Please ask if you want more details or references.
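If it helps, here is a minimal sketch of the watershed/union-find construction described above for a 1D sequence, with no external library (all names are mine). It returns every peak with its birth and death level, so you can keep only the peaks whose persistence (birth minus death) exceeds your chosen scale:

    #include <algorithm>
    #include <cstddef>
    #include <limits>
    #include <vector>

    struct Peak
    {
        std::size_t index;   // position of the local maximum
        double birth;        // value at the maximum
        double death;        // value at which it merges into a higher peak
    };

    // 0-dimensional persistence of the local maxima of a 1D signal
    // ("watershed from the top"): small (birth - death) means noise.
    std::vector<Peak> peakPersistence(const std::vector<double>& data)
    {
        const std::size_t n = data.size();
        std::vector<std::size_t> order(n);
        for (std::size_t i = 0; i < n; ++i) order[i] = i;
        std::sort(order.begin(), order.end(),
                  [&](std::size_t a, std::size_t b) { return data[a] > data[b]; });

        const std::size_t none = std::numeric_limits<std::size_t>::max();
        const double alive = -std::numeric_limits<double>::infinity();
        std::vector<std::size_t> parent(n, none);   // union-find; none = not processed yet
        std::vector<Peak> peaks;

        auto find = [&](std::size_t i) {
            while (parent[i] != i) { parent[i] = parent[parent[i]]; i = parent[i]; }
            return i;
        };

        for (std::size_t i : order)                 // lower the water level step by step
        {
            parent[i] = i;
            std::size_t left  = (i > 0     && parent[i - 1] != none) ? find(i - 1) : none;
            std::size_t right = (i + 1 < n && parent[i + 1] != none) ? find(i + 1) : none;

            if (left == none && right == none)
            {
                peaks.push_back({i, data[i], alive});            // a new peak is born
            }
            else if (left != none && right != none && left != right)
            {
                // two peaks merge here: the lower one dies at data[i]
                std::size_t lower  = (data[left] < data[right]) ? left : right;
                std::size_t higher = (lower == left) ? right : left;
                for (Peak& p : peaks)
                    if (p.death == alive && find(p.index) == lower)
                        p.death = data[i];
                parent[lower] = higher;
                parent[i] = higher;
            }
            else
            {
                parent[i] = (left != none) ? left : right;       // join the existing peak
            }
        }
        return peaks;   // filter: keep peaks with (birth - death) above your scale
    }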
Since you cannot decide on a cut-off frequency, or even on the filter you want to use, I would implement several and let the user set the parameters.
The first thing I thought of is a running average, and you can see that there are already quite a few things to set to get different outputs.
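For example, a centred running average with the window size exposed as the user-set parameter might look like this (a sketch; names are mine):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Centred running average; the window shrinks near the ends of the series.
    // halfWindow is the knob you would expose to the user.
    std::vector<double> runningAverage(const std::vector<double>& x,
                                       std::size_t halfWindow)
    {
        std::vector<double> y(x.size(), 0.0);
        for (std::size_t i = 0; i < x.size(); ++i)
        {
            std::size_t lo = (i >= halfWindow) ? i - halfWindow : 0;
            std::size_t hi = std::min(x.size() - 1, i + halfWindow);
            double sum = 0.0;
            for (std::size_t j = lo; j <= hi; ++j)
                sum += x[j];
            y[i] = sum / static_cast<double>(hi - lo + 1);
        }
        return y;
    }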