Symbolic Integration of Gaussian in SymPy - sympy

I am trying to integrate a function of the form cos(x)*exp(-x²) in SymPy, but I get no result; the antiderivative should involve some sort of error function. If I integrate just a Gaussian (for example with integrate(exp(-x**2), x)), I do get an output in error-function form. Are there any tricks so SymPy can handle more advanced integrands?
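For reference, here is a minimal sketch of the two calls described above (the plain Gaussian does return an erf antiderivative; the combined integrand is the one that comes back unevaluated for the asker):
from sympy import symbols, integrate, exp, cos

x = symbols('x')

# The plain Gaussian integrates to an error-function expression:
print(integrate(exp(-x**2), x))          # sqrt(pi)*erf(x)/2

# The combined integrand is the one reported to come back without a result:
print(integrate(cos(x)*exp(-x**2), x))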

Related

Does the SymPy function integral_steps reveal the result of the integration?

I'm using the SymPy function integral_steps to build a tool that, just like SymPy Gamma, reveals the integration steps when you ask it to integrate a function. My work-in-progress is available at https://lem.ma/1YH.
What I can't quite figure out is how to obtain the result of applying a particular rule. For example, consider the substitution rule
URule(u_var=_u, u_func=sin(x), constant=1, substep=ExpRule(base=E, exp=_u, context=exp(_u), symbol=_u), context=exp(sin(x))*cos(x), symbol=x)
The context field tells me that the function being integrated is exp(sin(x))*cos(x) and that the rule uses a particular substitution, but what is the result of that integration, so I can report it to the user the same way SymPy Gamma does? What I currently do is call integrate at every step, but that seems quite inefficient.
Perhaps there's an option that one can pass to integral_steps to make that information available?
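Concretely, my current approach looks roughly like this (using the context and symbol fields shown on the rule above):
from sympy import symbols, integrate, exp, sin, cos
from sympy.integrals.manualintegrate import integral_steps

x = symbols('x')
rule = integral_steps(exp(sin(x))*cos(x), x)

# Re-integrate the context of each rule to recover its result:
print(integrate(rule.context, rule.symbol))
if hasattr(rule, 'substep'):
    print(integrate(rule.substep.context, rule.substep.symbol))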
SymPy Gamma is open source, just like SymPy itself. Looking at its intsteps module, I see lines like
self.append("So, the result is: {}".format(
self.format_math(_manualintegrate(rule))))
So, the way to obtain the outcome of a rule is to call _manualintegrate(rule), which needs to be imported as
from sympy.integrals.manualintegrate import _manualintegrate
I imagine reading the rest of intsteps.py will be useful as well.
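As a minimal sketch (assuming the rule objects still expose the substep attribute shown in the URule above; the class layout may differ across SymPy versions), the result of the whole rule or of any nested substep can be obtained without re-calling integrate:
from sympy import symbols, exp, sin, cos
from sympy.integrals.manualintegrate import integral_steps, _manualintegrate

x = symbols('x')
rule = integral_steps(exp(sin(x))*cos(x), x)

# Outcome of the full rule, without a separate call to integrate:
print(_manualintegrate(rule))

# Outcome of a nested rule, e.g. the substep of the substitution rule:
if hasattr(rule, 'substep'):
    print(_manualintegrate(rule.substep))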

How can I print the output of a hidden layer in Lasagne

I am trying to use Lasagne to train a simple neural network and to use my own C++ code for inference. I use the weights generated by Lasagne, but I am not able to get good results. Is there a way I can print the output of a hidden layer and/or the calculations themselves? I want to see how it works under the hood, so I can implement it the same way in C++.
I can help with Lasagne + Theano in Python; I am not sure from your question whether you work entirely in C++ or only need the results from Python + Lasagne in your C++ code.
Suppose you have a simple network like this:
l_in = lasagne.layers.InputLayer(...)
l_in_drop = lasagne.layers.DropoutLayer(l_in, ...)
l_hid1 = lasagne.layers.DenseLayer(l_in_drop, ...)
l_out = lasagne.layers.DenseLayer(l_hid1, ...)
You can get the output expression of each layer by calling lasagne.layers.get_output on a specific layer:
lasagne.layers.get_output(l_in, deterministic=False) # this will just give you the input tensor
lasagne.layers.get_output(l_in_drop, deterministic=True)
lasagne.layers.get_output(l_hid1, deterministic=True)
lasagne.layers.get_output(l_out, deterministic=True)
When you are dealing with dropout and are not in the training phase, it's important to call get_output with the deterministic parameter set to True to avoid non-deterministic behaviour. This applies to all layers that are preceded by one or more dropout layers.
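Note that get_output returns a symbolic Theano expression, not numbers. To actually print the values of a hidden layer, compile the expression into a function and feed it a concrete input; here is a minimal sketch, assuming the layers defined above and an input array X shaped as the InputLayer expects:
import theano
import lasagne

# Compile the symbolic hidden-layer output into a callable function.
hid1_expr = lasagne.layers.get_output(l_hid1, deterministic=True)
get_hid1 = theano.function([l_in.input_var], hid1_expr)

# X is a NumPy array with the shape declared in the InputLayer.
print(get_hid1(X))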
I hope this answers your question.

Using Microsoft Solver Foundation to solve a linear programming task requiring thousands of data points

Using Microsoft Solver Foundation, I am trying to solve a linear program of the form Ax <= b, where A is a matrix containing thousands of data points.
I know that I can new up a Model object and then use the AddConstraint method to add constraints in equation form. However, assembling those equations when each contains thousands of variables is just not practical. I looked at the Model class and cannot find a way to simply give it the matrix and other info.
How can I do this?
Thanks!
You can make A a parameter and bind your data to it. Warning: Microsoft Solver Foundation was discontinued a while ago, so you are advised to consider an alternative modeling system.

How to perform integration of a number in python2.7

Can someone please help me integrate a_z, as shown in the picture, using Python 2.7?
Is there any predefined method to perform this? Thank you.
Try this:
http://docs.scipy.org/doc/scipy/reference/tutorial/integrate.html
for numerical integration.
In your particular example, a_z is a linear function of t, so you don't need any numerical methods: you can do the integral symbolically and just evaluate the resulting function, which is much more efficient.
In fact, the solution is already printed there, so I'm not sure what else you need to know.
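As a rough sketch of both approaches (the coefficients below are placeholders, since the actual a_z from the picture is not reproduced here):
from scipy.integrate import quad

# Hypothetical linear acceleration a_z(t) = c0 + c1*t; the real coefficients
# come from the expression in the picture.
c0, c1 = 1.0, 0.5
a_z = lambda t: c0 + c1 * t

# Numerical integration with scipy.integrate.quad over t in [0, 10]:
value, abserr = quad(a_z, 0.0, 10.0)
print(value)

# Because a_z is linear, the antiderivative is c0*t + c1*t**2/2, so the
# closed form can be evaluated directly instead:
print(c0 * 10.0 + c1 * 10.0**2 / 2)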

NLTK wrapper for Weka to build a classifier

I'm building a named-entity classifier with nltk, focusing on location retrieval (of any type, from countries to museums, restaurants or roads). I'm trying to vary the featuresets and methods I use.
For now, I've used NLTK's built-in Maxent, NaiveBayes, PositiveNaiveBayes, DecisionTrees and SVM. I'm using 40 different combinations of featuresets.
Maxent seems to be the best, but it's too slow. nltk's SVM is for binary classification, and I had some issues pickling the final classifier. Then I tried nltk's wrapper for the scikit-learn SVM, but it didn't accept my inputs; I tried to adapt them but ran into a float coercion problem.
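For reference, this is roughly the pattern I'm following with that wrapper (the feature dicts here are placeholders, not my actual featuresets):
from nltk.classify.scikitlearn import SklearnClassifier
from sklearn.svm import LinearSVC

# Toy training data: (feature dict, label) pairs, as nltk classifiers expect.
train_set = [
    ({'word': 'Paris', 'is_capitalized': True}, 'LOC'),
    ({'word': 'banana', 'is_capitalized': False}, 'O'),
]

classifier = SklearnClassifier(LinearSVC())
classifier.train(train_set)
print(classifier.classify({'word': 'London', 'is_capitalized': True}))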
Now I'm considering using nltk's wrapper for Weka, but I don't know whether it could give results different enough to be worth trying, and I don't have much time. My question is: what advantages does Weka have over nltk's built-in classifiers?