I'm on Python 3.3 and I have to test a method which uses call from the subprocess module.
I tried:
subprocess.call = MagicMock()
with patch('subprocess.call') as TU_call:
but in debug mode I found that Python still effectively calls the real subprocess.call.
Works fine for me (Ubuntu 13.04, Python 3.3.1):
$ python3.3
Python 3.3.1 (default, Sep 25 2013, 19:29:01)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import mock
>>> import subprocess
>>> result = subprocess.call('date')
Fri Jan 3 19:45:32 CET 2014
>>> subprocess.call = mock.create_autospec(subprocess.call, return_value='mocked!')
>>> result = subprocess.call('date')
>>> print(result)
mocked!
>>> subprocess.call.mock_calls
[call('date')]
I believe this question is about the usage of this particular mock package
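If you want the patch form that the question attempts, here is a minimal sketch; the important detail is that the code under test must run inside the with block, while subprocess.call is replaced:
import subprocess
from unittest import mock  # part of the standard library since Python 3.3

with mock.patch('subprocess.call', return_value='mocked!') as TU_call:
    result = subprocess.call('date')  # the code under test must run here

print(result)              # mocked!
print(TU_call.mock_calls)  # [call('date')]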
General statements, unrelated to your direct question
I wrote this up before I understood that the question is specifically about the use of the Python mock package.
One general way to mock functions is to explicitly redefine the function or method:
$ python3.3
Python 3.3.1 (default, Sep 25 2013, 19:29:01)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.call('date')
Fri Jan 3 19:23:25 CET 2014
0
>>> def mocked_call(*a, **kw):
...     return 'mocked'
...
>>> subprocess.call = mocked_call
>>> subprocess.call('date')
'mocked'
The big advantage of this straightforward approach is that it is free of any package dependencies. The disadvantage is that if there are specific needs, such as recording calls for later assertions, all the decision-making logic has to be coded manually.
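For example, a minimal sketch of a hand-rolled replacement that records its calls and restores the original afterwards:
import subprocess

calls = []

def mocked_call(*args, **kwargs):
    # Record every invocation so a test can inspect it later.
    calls.append((args, kwargs))
    return 'mocked'

original_call = subprocess.call
subprocess.call = mocked_call
try:
    print(subprocess.call('date'))  # mocked
finally:
    subprocess.call = original_call  # always put the real function back

print(calls)  # [(('date',), {})]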
As an example of a mocking package, FlexMock is available for both Python 2.7 and Python 3.*, and its use for overriding subprocess.call is discussed in this question.
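For reference, a minimal FlexMock sketch; when run under its unittest/pytest integration FlexMock undoes the patch at teardown, and outside a test runner you would restore subprocess.call yourself:
import subprocess
from flexmock import flexmock

flexmock(subprocess).should_receive('call').and_return('mocked')
print(subprocess.call('date'))  # mocked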
This works for subprocess.check_output in Python 3:
# Decorators are applied bottom-up, so the Popen.communicate mock is the
# first mock argument and the check_output mock is the second.
@mock.patch('subprocess.check_output')
@mock.patch('subprocess.Popen.communicate')
def test_prepare_data_for_matrices(self, communicate_mock, check_output_mock):
    config_file = open(os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir) + '/etc/test/config.json')).read()
    check_output_mock.return_value = ("output", "Error")
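For reference, a self-contained sketch of the same pattern with an assertion at the end; the function under test here is hypothetical:
import subprocess
import unittest
from unittest import mock


def run_date():
    # Hypothetical code under test.
    return subprocess.check_output(['date'])


class CheckOutputTest(unittest.TestCase):
    @mock.patch('subprocess.check_output')
    def test_run_date(self, check_output_mock):
        check_output_mock.return_value = b'mocked output\n'
        self.assertEqual(run_date(), b'mocked output\n')
        check_output_mock.assert_called_once_with(['date'])


if __name__ == '__main__':
    unittest.main()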
Related
I tried to use both openslide and pyvips, and my application doesn't find the necessary .dll. I think it is a problem with using both libraries.
I have read that pyvips has openslide embedded, but I can't find how to use it. The main purpose of this is to read Whole Slide Images, see the different levels and augmentations, and work with them.
I'd really appreciate your help! Thank you
Yes, pyvips usually includes openslide, so you can't use both together.
Use .get_fields() to see all the metadata on an image, for example:
$ python3
Python 3.9.7 (default, Sep 10 2021, 14:59:43)
[GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pyvips
>>> x = pyvips.Image.new_from_file("openslide/CMU-1.svs")
>>> x.width
46000
>>> x.height
32914
>>> x.get_fields()
['width', 'height', 'bands', 'format', 'coding', 'interpretation', 'xoffset', 'yoffset',
'xres', 'yres', 'filename', 'vips-loader', 'slide-level', 'aperio.AppMag', 'aperio.Date',
'aperio.Filename', 'aperio.Filtered', 'aperio.Focus Offset', 'aperio.ICC Profile',
'aperio.ImageID', 'aperio.Left', 'aperio.LineAreaXOffset', 'aperio.LineAreaYOffset',
...
pyvips will open the base level of the image (the largest) by default; use level= to pick other levels, perhaps:
>>> x = pyvips.Image.new_from_file("openslide/CMU-1.svs", level=2)
>>> x.width
2875
See the docs for details:
https://www.libvips.org/API/current/VipsForeignSave.html#vips-openslideload
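A minimal sketch combining the two, reading a metadata field and extracting a region from a chosen level (the file path and region here are just examples):
import pyvips

# Load level 2 of the pyramid via the openslide loader built into pyvips.
slide = pyvips.Image.new_from_file("openslide/CMU-1.svs", level=2)

# Any name listed by get_fields() can be read with get().
print(slide.get("slide-level"))

# Crop a 512x512 region at the top-left corner and save it as a PNG.
tile = slide.crop(0, 0, 512, 512)
tile.write_to_file("tile.png")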
I would like to decode a MACCYRILLIC-encoded string, for example "%EE%F2_%E4%EE%E1%F0%E0_%E4%EE%E1%F0%E0_%ED%E5_%E8%F9%F3%F2". How can I do it using Python 2?
phrase.decode("MACCYRILLIC") has no effect.
urllib — Open arbitrary resources by URL
urllib.unquote(string)
Replace %xx escapes by their single-character equivalent.
Example: unquote('/%7Econnolly/') yields '/~connolly/'.
==> py -2
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:24:40) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
>>> import urllib
>>> MACCYRILLIC = "%EE%F2_%E4%EE%E1%F0%E0_%E4%EE%E1%F0%E0_%ED%E5_%E8%F9%F3%F2"
>>> print urllib.unquote(MACCYRILLIC).decode('cp1251')
от_добра_добра_не_ищут
>>>
Edit. Another approach (step by step):
==> py -2
Python 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:24:40) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib
>>> import codecs
>>> MACCYRILLIC = '%EE%F2_%E4%EE%E1%F0%E0_%E4%EE%E1%F0%E0_%ED%E5_%E8%F9%F3%F2'
>>> #print
... x = urllib.unquote(MACCYRILLIC) #.decode('cp1251')
>>> print repr(x)
'\xee\xf2_\xe4\xee\xe1\xf0\xe0_\xe4\xee\xe1\xf0\xe0_\xed\xe5_\xe8\xf9\xf3\xf2'
>>> y = codecs.decode(x, 'cp1251')
>>> print y
от_добра_добра_не_ищут
>>>
All of the above works under the following requirement:
>>> import sys
>>> sys.stdout.encoding
'utf-8'
>>> print sys.stdout.encoding
utf-8
>>>
Unfortunately, the example at http://rextester.com/XAX79891 shows sys.stdout.encoding as None (and I don't know a way of changing it to utf-8). Read more in Lennart Regebro's answer to Stdout encoding in Python:
A better generic solution under Python 2 is to treat stdout as what
it is: An 8-bit interface. And that means that anything you print
to stdout should be 8-bit. You get the error when you are trying to
print Unicode data, because print will then try to encode the Unicode
data to the encoding of stdout, and if it's None it will assume
ASCII, and fail, unless you set PYTHONIOENCODING.
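Following that advice, a minimal Python 2 sketch that encodes to 8-bit UTF-8 explicitly, so it no longer depends on sys.stdout.encoding:
import urllib

MACCYRILLIC = "%EE%F2_%E4%EE%E1%F0%E0_%E4%EE%E1%F0%E0_%ED%E5_%E8%F9%F3%F2"
text = urllib.unquote(MACCYRILLIC).decode('cp1251')  # a unicode object
print text.encode('utf-8')  # write bytes, regardless of sys.stdout.encoding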
I'm trying to use intcomma to format my number in a template, but it does not work properly.
{%load humanize%}
{%blocktrans with val=myvalue|intcomma%}The number is {{val}}{%endblocktrans%}
After some searching, I found that django.utils.formats.number_format is not functioning. Here is my test:
corpweb#56944bf480d1:~$ ./manage.py shell
Python 3.4.4 (default, Feb 17 2016, 02:50:56)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import locale
>>> from django.utils.formats import number_format
>>> val=123456789
>>> number_format(val,force_grouping=True)
'123456789'
>>> locale.getlocale()
('en_US', 'UTF-8')
>>>
Is there anything I set up wrong?
When rendering templates or using number_format outside of the Django app flow, the translation module is not activated. Here are a few notes and instructions on how to turn on translation in custom management commands.
To make the shell example work we just need to activate the translation module as such:
(venv) $ ./manage.py shell
Python 3.6.4 (default, Mar 1 2018, 18:36:50)
>>> from django.utils.formats import number_format
>>> from django.utils import translation
>>> translation.activate('en-us')
>>> number_format(50000, force_grouping=True)
'50,000'
The key line above is: translation.activate('en-us')
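The same applies to the custom management commands mentioned above; a minimal sketch (the app and command names here are hypothetical):
# myapp/management/commands/show_number.py  (hypothetical paths/names)
from django.core.management.base import BaseCommand
from django.utils import translation
from django.utils.formats import number_format


class Command(BaseCommand):
    def handle(self, *args, **options):
        # Activate a locale so number_format can apply its grouping rules.
        translation.activate('en-us')
        self.stdout.write(number_format(123456789, force_grouping=True))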
Everything looks OK with your setup, I guess. Just set USE_L10N = True in your settings.py if it is set to False, as @Tim Schneider mentioned, and try it like this: {{ val|intcomma }}, as @Leonard2 mentioned, and it should work. Also, as mentioned here, make sure:
To activate these filters, add 'django.contrib.humanize' to your INSTALLED_APPS setting.
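Put together, a minimal sketch of the relevant pieces (assuming myvalue is provided by your view context):
# settings.py (relevant excerpt)
INSTALLED_APPS = [
    # ...
    'django.contrib.humanize',
]
USE_L10N = True
and in the template:
{% load humanize %}
The number is {{ myvalue|intcomma }}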
I'm a bit lost on how to extract coordinates (Lat, Long) from a URL in Python.
I will always receive a URL like this:
https://www.testweb.com/cordi?ll=41.403781,2.1896&z=17&pll=41.403781,2.1896
I need to extract the second set of coordinates from this URL (in this case: 41.403781,2.1896). Note that the first and second sets of coordinates will not always be the same.
I know that it can be done with some regex, but I'm not good enough at it.
Here's how to do it with a regular expression:
import re
m = re.search(r'pll=(\d+\.\d+),(\d+\.\d+)', 'https://www.testweb.com/cordi?ll=41.403781,2.1896&z=17&pll=41.403781,2.1896')
print m.groups()
Result: ('41.403781', '2.1896')
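Note that the pattern above only matches positive decimals; if the coordinates could be negative (e.g. western longitudes), here is a sketch allowing an optional sign:
import re

url = 'https://www.testweb.com/cordi?ll=41.403781,2.1896&z=17&pll=41.403781,2.1896'
# Allow an optional leading minus sign on either coordinate.
m = re.search(r'pll=(-?\d+\.\d+),(-?\d+\.\d+)', url)
print(m.groups())  # ('41.403781', '2.1896')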
You might want to look at the urlparse module for a more robust solution.
urlparse has the functions urlparse and parse_qs for accessing this data reliably, as shown below:
$ python
Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> u="""https://www.testweb.com/cordi?ll=41.403781,2.1896&z=17&pll=41.403781,2.1896"""
>>> import urlparse
>>> x=urlparse.urlparse(u)
>>> x
ParseResult(scheme='https', netloc='www.testweb.com', path='/cordi', params='', query='ll=41.403781,2.1896&z=17&pll=41.403781,2.1896', fragment='')
>>> x.query
'll=41.403781,2.1896&z=17&pll=41.403781,2.1896'
>>> urlparse.parse_qs(x.query)
{'ll': ['41.403781,2.1896'], 'z': ['17'], 'pll': ['41.403781,2.1896']}
>>>
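If you are on Python 3, the same approach works via urllib.parse; a minimal sketch:
from urllib.parse import urlparse, parse_qs

url = 'https://www.testweb.com/cordi?ll=41.403781,2.1896&z=17&pll=41.403781,2.1896'
params = parse_qs(urlparse(url).query)
# parse_qs returns a list per key; take the first "pll" value and split it.
lat, lng = (float(part) for part in params['pll'][0].split(','))
print(lat, lng)  # 41.403781 2.1896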
I want to use Stanford NER in python using pyner library. Here is one basic code snippet.
import ner
tagger = ner.HttpNER(host='localhost', port=80)
tagger.get_entities("University of California is located in California, United States")
When I run this in my local Python console (IDLE), it should give me an output like this:
{'LOCATION': ['California', 'United States'],
'ORGANIZATION': ['University of California']}
but when I execute this, it shows empty brackets. I am actually new to all this.
I am able to run the stanford-ner server in socket mode using:
java -mx1000m -cp stanford-ner.jar edu.stanford.nlp.ie.NERServer \
-loadClassifier classifiers/english.muc.7class.distsim.crf.ser.gz \
-port 8080 -outputFormat inlineXML
and receive the following output from the command line:
Loading classifier from
/Users/roneill/stanford-ner-2012-11-11/classifiers/english.muc.7class.distsim.crf.ser.gz
... done [1.7 sec].
Then in the Python REPL:
Python 2.7.2 (default, Jun 20 2012, 16:23:33)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ner
>>> tagger = ner.SocketNER(host='localhost', port=8080)
>>> tagger.get_entities("University of California is located in California, United States")
{'ORGANIZATION': ['University of California'], 'LOCATION': ['California', 'United States']}
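The likely cause of the empty result in the question is the client: the NERServer started above listens on a raw socket on port 8080, so it needs ner.SocketNER with the matching port rather than ner.HttpNER on port 80. A minimal sketch:
import ner

# Point the client at the same host and port as the running NERServer.
tagger = ner.SocketNER(host='localhost', port=8080)
print(tagger.get_entities("University of California is located in California, United States"))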