Getting a derivative with "sympy"

I am trying to get the derivative of an equation.
I found a library named "sympy" that can do this, but I keep getting an error while using it.
This is my code:
from sympy import *
x = symbols('x')
diff(cos(x), x)
This is the error:
importing sympy.geometry.util with 'from sympy import *' has been
deprecated since SymPy 1.6. Use import sympy.geometry.util instead.
See https://github.com/sympy/sympy/issues/18245 for more info.
self.Warn(
I tried replacing 'from sympy import *' with 'import sympy.geometry.util', but it still doesn't work.
This is the error after the replacement:
importing sympy.geometry.util with 'from sympy import *' has been
deprecated since SymPy 1.6. Use import sympy.geometry.util instead.
See https://github.com/sympy/sympy/issues/18245 for more info.
self.Warn(
How can I solve this?

I tried running your code on my machine and it works fine.
Try reinstalling 'sympy'.
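A minimal reinstall plus sanity check might look like this (a sketch; it assumes pip is on the PATH):
pip install --upgrade --force-reinstall sympy
Then, in a fresh interpreter:
from sympy import symbols, cos, diff
x = symbols('x')
print(diff(cos(x), x))  # expected output: -sin(x)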

Related

Why can't I import WD_ALIGN_PARAGRAPH from docx.enum.text?

I transferred some code from IDLE 3.5 (64-bit) to PyCharm (Python 2.7). Most of the code is still working; for example, I can import WD_LINE_SPACING from docx.enum.text, but for some reason I can't import WD_ALIGN_PARAGRAPH.
At first, nearly none of the imports worked, but after I did
pip install python-docx
instead of
pip install docx
most of the imports worked except for WD_ALIGN_PARAGRAPH.
# works
from __future__ import print_function
import xlrd
import xlwt
import os
import subprocess
from calendar import monthrange
import datetime
from docx import Document
from datetime import datetime
from datetime import date
from docx.enum.text import WD_LINE_SPACING
from docx.shared import Pt
# does not work
from docx.enum.text import WD_ALIGN_PARAGRAPH
I don't get any error messages, but PyCharm marks the line as an error:
"Cannot find reference 'WD_ALIGN_PARAGRAPH' in 'text.py'".
You can use this instead:
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT
and then substitute WD_PARAGRAPH_ALIGNMENT wherever WD_ALIGN_PARAGRAPH would have appeared before.
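For example, a minimal sketch of the substitution (the file name example.docx is just illustrative):
from docx import Document
from docx.enum.text import WD_PARAGRAPH_ALIGNMENT

doc = Document()
paragraph = doc.add_paragraph('Centered text')
paragraph.alignment = WD_PARAGRAPH_ALIGNMENT.CENTER  # same effect as WD_ALIGN_PARAGRAPH.CENTER
doc.save('example.docx')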
The reason this is happening is that the actual enum object is named WD_PARAGRAPH_ALIGNMENT, and a decorator is applied that also allows it to be referenced as WD_ALIGN_PARAGRAPH (which is a little shorter, and possibly clearer). I expect the syntax checker in PyCharm is operating on direct module attributes and doesn't pick up the alias, which is resolved by the Python parser/compiler.
Interestingly, I expect your code would work fine either way. But to get rid of the annoying message you can use the base name.
If someone uses pylint it can be easily suppressed with # pylint: disable=E0611 added at the end of the import line.
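For example:
from docx.enum.text import WD_ALIGN_PARAGRAPH  # pylint: disable=E0611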

How can I specify a non-theano based likelihood?

I saw a post from a few days ago by someone else: pymc3 likelihood math with non-theano function. Even though I think the problem at its core is the same, I thought I would ask with a simpler example:
Inside logp_wrap, I put some made-up definition of a likelihood function. It depends on the RV and an observation. In this case I could do this with theano operations, but let's say that I want this function to be more complex and so I cannot use theano.
The problem comes when I try to define the likelihood both in terms of an RV and observations. From what I have seen, this format would work if I was specifying everything in 'logp_wrap' as theano operations.
I have searched around for a solution to this, but haven't found anything where this problem is fully addressed.
The problem in my attempt to do this is actually that the logp_ function is correctly decorated, but the logp_wrap function is only correctly decorated for its input, and not for its output, so I get the error
TypeError: 'TensorVariable' object is not callable.
It would be great if someone had a solution – I don't think I am the only one with this problem.
The theano version of this that works (and uses the same function within a function definition) without the #as_op code is here: https://pymc-devs.github.io/pymc3/notebooks/lda-advi-aevb.html?highlight=densitydist (Specifically the sections: "Log-likelihood of documents for LDA" and "LDA model section")
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pymc3 as pm
from theano import as_op
import theano.tensor as T
from scipy.stats import norm
#Some data that we observed
g_observed = [0.0, 1.0, 2.0, 3.0]
#Define a function to calculate the logp without using theano
#This as_op is where the problem is - the input is an rv but the output is a
#function.
#as_op(itypes=[T.dscalar],otypes=[T.dscalar])
def logp_wrap(rv):
    #We are not using theano so we wrap the function.
    #as_op(itypes=[T.dvector],otypes=[T.dscalar])
    def logp_(ob):
        #Some made up likelihood -
        #The key here is that lp depends on the rv input and the observations
        lp = np.log(norm.pdf(rv + ob))
        return lp
    return logp_

hb1_model = pm.Model()
with hb1_model:
    I_mean = pm.Normal('I_mean', mu=0.1, sd=0.05)
    xs = pm.DensityDist('x', logp_wrap(I_mean), observed=g_observed)
with hb1_model:
    step = pm.Metropolis()
    trace = pm.sample(1000, step)
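For reference, here is a sketch of the theano-only variant the question says would work, with the made-up likelihood rewritten using theano operations so that no as_op wrapping is needed (the name logp_wrap_theano and the choice of a standard normal pdf are just illustrative; this assumes pymc3 3.x with theano):
import numpy as np
import pymc3 as pm
import theano.tensor as T

g_observed = [0.0, 1.0, 2.0, 3.0]

def logp_wrap_theano(rv):
    def logp_(ob):
        # same kind of made-up likelihood, expressed with theano ops:
        # log of a standard normal pdf evaluated at rv + ob
        return T.log(T.exp(-0.5 * (rv + ob) ** 2) / T.sqrt(2 * np.pi))
    return logp_

with pm.Model() as hb1_model:
    I_mean = pm.Normal('I_mean', mu=0.1, sd=0.05)
    xs = pm.DensityDist('x', logp_wrap_theano(I_mean), observed=g_observed)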

Using C++ complex functions in Cython

I am trying to do a complex exponential in Cython.
I have been able to cobble together the following code for my .pyx file:
from libc.math cimport sin, cos, acos, exp, sqrt, fabs, M_PI, floor, ceil
cdef extern from "complex.h":
    double complex cexp(double complex z)
import numpy as np
cimport numpy as np
import cython
from cython.parallel cimport prange, parallel

def try_cexp():
    cdef:
        double complex rr1
        double complex rr2
    rr1 = 1j
    rr2 = 2j
    print(rr1*rr2)
    #print(cexp(rr1))
Note that print(cexp(rr1)) is commented out. When the line is active, I get the following error when running setup.py:
error: command 'C:\\WinPYthon\\Winpython-64bit-3.4.3.6\\python-3.4.3.amd64\\scripts\\gcc.exe' failed with exit status 1
Note that when cexp is commented out, everything runs as expected: I can run setup.py, and when I test the function it prints out the product of the two complex numbers.
Here is my setup.py file. Note that it includes code to use OpenMP in Cython with g++:
from distutils.core import setup
from Cython.Build import cythonize
from distutils.extension import Extension
from Cython.Distutils import build_ext
import numpy as np
import os

os.environ["CC"] = "g++-4.7"
os.environ["CXX"] = "g++-4.7"

# These were added based on some examples I had seen of cexp in Cython. No effect.
#import pyximport
#pyximport.install(reload_support=True)

ext_modules = [
    Extension('complex_test',
              ['complex_test.pyx'],
              language="c++",
              extra_compile_args=['-fopenmp'],
              extra_link_args=['-fopenmp', '-lm'])  # Note that '-lm' was
              # added due to an example where someone mentioned g++ required
              # this. Same results with and without it.
]

setup(
    name='complex_test',
    cmdclass={'build_ext': build_ext},
    ext_modules=ext_modules,
    include_dirs=[np.get_include()]
)
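For reference, with this setup.py the extension is typically built in place with the usual command (a sketch; this assumes the standard build_ext invocation is what "running setup.py" refers to):
python setup.py build_ext --inplace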
Ultimately my goal is to speed up a calculation that looks like k*exp(z), where k and z are complex. Currently I am using numerical expressions, but that has a large memory overhead, and I believe it is possible to optimize further.
Thank you for your help.
You're using cexp instead of exp, as it is named in C++. Change your cdef extern to:
cdef extern from "<complex.h>" namespace "std":
    double complex exp(double complex z)
    float complex exp(float complex z)  # overload
and your print call to:
print(exp(rr1))
and it should work as a charm.
I know the compilation messages are lengthy, but in there you can find the error that points to the culprit:
complex_test.cpp: In function ‘PyObject* __pyx_pf_12complex_test_try_cexp(PyObject*)’:
complex_test.cpp:1270:31: error: cannot convert ‘__pyx_t_double_complex {aka std::complex<double>}’ to ‘__complex__ double’ for argument ‘1’ to ‘__complex__ double cexp(__complex__ double)’
__pyx_t_3 = cexp(__pyx_v_rr1);
It's messy, but you can see the cause: you're supplying a C++-defined type (__pyx_t_double_complex in Cython jargon) to a C function which expects a different type (__complex__ double).
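Putting the answer together, a minimal sketch of what the corrected complex_test.pyx could look like (built with the setup.py above, i.e. compiled as C++; the libc.math cimport of exp is left out here so it does not clash with std::exp):
cdef extern from "<complex.h>" namespace "std":
    double complex exp(double complex z)

def try_cexp():
    cdef:
        double complex rr1
        double complex rr2
    rr1 = 1j
    rr2 = 2j
    print(rr1*rr2)   # complex multiplication is handled by Cython itself
    print(exp(rr1))  # now resolves to std::exp on std::complex<double>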

specify Atspi version before import

I use a Python library which uses pyatspi (from pyatspi import …). When I run it on (L)Ubuntu 16.04, it throws the following error:
/usr/lib/python2.7/dist-packages/pyatspi/__init__.py:17: PyGIWarning: Atspi was imported without specifying a version first. Use gi.require_version('Atspi', '2.0') before import to ensure that the right version gets loaded.
from gi.repository import Atspi
Although this error message says exactly what I should do, just adding the line gi.require_version('Atspi', '2.0') in /usr/lib/python2.7/dist-packages/pyatspi/__init__.py doesn't work (it gives NameError: name 'gi' is not defined). What am I doing wrong?
It's necessary to import require_version from gi first, so just add:
from gi import require_version
require_version('Atspi', '2.0')
before the
from gi.repository import Atspi
line in the file given by the error message, which was /usr/lib/python2.7/dist-packages/pyatspi/__init__.py here.
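In context, the top of that __init__.py would then start roughly like this (a sketch; the rest of the file stays unchanged):
# /usr/lib/python2.7/dist-packages/pyatspi/__init__.py
from gi import require_version
require_version('Atspi', '2.0')  # must run before the Atspi import below
from gi.repository import Atspi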

ImportError: No module named stanford_segmenter

The StanfordSegmenter does not have an interface in nltk, unlike StanfordPOStagger or StanfordNER. So to use it, I basically have to create an interface for StanfordSegmenter manually, namely stanford_segmenter.py under ../nltk/tokenize/. I followed the instructions here: http://textminingonline.com/tag/chinese-word-segmenter
However, when I tried to run from nltk.tokenize.stanford_segmenter import stanford_segmenter, I got an error:
Traceback (most recent call last):
File "C:\Users\qubo\Desktop\stanfordparserexp.py", line 48, in <module>
from nltk.tokenize.stanford_segmenter import stanford_segmenter
ImportError: No module named stanford_segmenter
[Finished in 0.6s]
The instructions mentioned reinstalling nltk after creating stanford_segmenter.py. I don't quite get the point, but I did it anyway. However, the process can hardly be called a 'reinstall'; it is more like detaching and reconnecting nltk to the Python libs.
I'm using 64-bit Windows and Python 2.7.11. NLTK and all relevant packages are updated to the latest version. I wonder if you can shed some light on this. Thank you all so much.
I was able to import the module by running the following code:
import imp
yourmodule = imp.load_source("module_name.py", "/path/to/module_name.py")
yourclass = yourmodule.TheClass()
yourclass is an instance of the class, and TheClass is the name of the class you want to instantiate. This is similar to:
from pkg_name.module_name import TheClass
So in the case of StanfordSegmenter, the complete code is as follows:
# -*- coding: utf-8 -*-
import imp
import os
ini_path = 'D:/jars/stanford-segmenter-2015-04-20/'
os.environ['STANFORD_SEGMENTER'] = ini_path + 'stanford-segmenter-3.5.2.jar'
stanford_segmenter = imp.load_source("stanford_segmenter", "C:/Users/qubo/Miniconda2/pkgs/nltk-3.1-py27_0/Lib/site-packages/nltk/tokenize/stanford_segmenter.py")
seg = stanford_segmenter.StanfordSegmenter(
    path_to_model='D:/jars/stanford-segmenter-2015-04-20/data/pku.gz',
    path_to_jar='D:/jars/stanford-segmenter-2015-04-20/stanford-segmenter-3.5.2.jar',
    path_to_dict='D:/jars/stanford-segmenter-2015-04-20/data/dict-chris6.ser.gz',
    path_to_sihan_corpora_dict='D:/jars/stanford-segmenter-2015-04-20/data')
sent = '我有一只小毛驴我从来也不骑。'
text = seg.segment(sent.decode('utf-8'))