Pyomo - Location of Log Files - pyomo

Pretty basic question, but where can I find solver log files from Pyomo? I have a local installation of the COIN-OR solvers on an Ubuntu machine.
This is happening in a Jupyter notebook, but I'm getting the same error message when I run the .py file from the terminal.
solverpath_exe = '~/COIN-OR/bin/couenne'
opt = SolverFactory('couenne', executable=solverpath_exe)
opt.solve(model, tee=True)
---------------------------------------------------------------------------
ApplicationError Traceback (most recent call last)
<ipython-input-41-48380298846e> in <module>()
29 #instance = model.create_instance()
30 opt = SolverFactory('couenne', executable = solverpath_exe)
---> 31 opt.solve(model,tee=True)
32 #solver=SolverFactory(solvername,executable=solverpath_exe)
/home/ralphasher/.local/lib/python3.6/site-packages/pyomo/opt/base/solvers.py in solve(self, *args, **kwds)
598 logger.error("Solver log:\n" + str(_status.log))
599 raise pyutilib.common.ApplicationError(
--> 600 "Solver (%s) did not exit normally" % self.name)
601 solve_completion_time = time.time()
602 if self._report_timing:
ApplicationError: Solver (asl) did not exit normally

To keep the solver log files, you need to specify that you want to keep them when calling solve on your model:
opt.solve(model, tee=True, keepfiles=True)
The resulting files will be written next to your main executable.
You can also write the log to a file with a specific name, using
opt.solve(model, tee=True, logfile="some_file_name.log")
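Putting it together, a minimal sketch (assuming model is the Pyomo model from the question; the expanduser call is an assumption about why the solver fails to launch, since '~' is generally not expanded outside a shell):
import os.path
from pyomo.environ import SolverFactory

# Hypothetical solver location; expand '~' explicitly because subprocess
# calls do not perform shell tilde expansion.
solverpath_exe = os.path.expanduser('~/COIN-OR/bin/couenne')
opt = SolverFactory('couenne', executable=solverpath_exe)

# tee=True echoes solver output to the console, keepfiles=True preserves the
# temporary problem/solution files, and logfile writes the log to a known name.
results = opt.solve(model, tee=True, keepfiles=True, logfile='couenne.log')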

Related

Keep Getting Permission denied when using fastai library on AWS setting

I'm learning deep learning by taking a course that uses fastai. I'm running the fastai library on an AWS p2.xlarge instance. When I run some fastai functions I get this error:
Traceback (most recent call last)
<ipython-input-12-1d86fc0ece07> in <module>()
1 arch = resnet34
2 data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch,sz ))
----> 3 learn = ConvLearner.pretrained(arch, data, precompute = True)
4 learn.fit(0.01, 2)
~/fastai/fastai/conv_learner.py in pretrained(cls, f, data, ps, xtra_fc, xtra_cut, custom_head, precompute, pretrained, **kwargs)
112 models = ConvnetBuilder(f, data.c, data.is_multi, data.is_reg,
113 ps=ps, xtra_fc=xtra_fc, xtra_cut=xtra_cut, custom_head=custom_head, pretrained=pretrained)
--> 114 return cls(data, models, precompute, **kwargs)
115
116 @classmethod
~/fastai/fastai/conv_learner.py in __init__(self, data, models, precompute, **kwargs)
95 def __init__(self, data, models, precompute=False, **kwargs):
96 self.precompute = False
---> 97 super().__init__(data, models, **kwargs)
98 if hasattr(data, 'is_multi') and not data.is_reg and self.metrics is None:
99 self.metrics = [accuracy_thresh(0.5)] if self.data.is_multi else [accuracy]
~/fastai/fastai/learner.py in __init__(self, data, models, opt_fn, tmp_name, models_name, metrics, clip, crit)
35 self.tmp_path = tmp_name if os.path.isabs(tmp_name) else os.path.join(self.data.path, tmp_name)
36 self.models_path = models_name if os.path.isabs(models_name) else os.path.join(self.data.path, models_name)
---> 37 os.makedirs(self.tmp_path, exist_ok=True)
38 os.makedirs(self.models_path, exist_ok=True)
39 self.crit = crit if crit else self._get_crit(data)
~/anaconda3/envs/fastai/lib/python3.6/os.py in makedirs(name, mode, exist_ok)
218 return
219 try:
--> 220 mkdir(name, mode)
221 except OSError:
222 # Cannot rely on checking for EEXIST, since the operating system
PermissionError: [Errno 13] Permission denied: 'data/dogscats/tmp'
I think the AWS console has no permission to make the directory.
I ran sudo mkdir tmp data/dogscats/, but I got another error that I couldn't understand.
I think I have to give AWS some permission, but I have no clue how to do that.
I hope you can give me a clear idea of how to solve this kind of problem.
fastai saves data such as the current loss in a folder that it creates. By default the folder is created in the working directory, but you can pass a path argument pointing to a location where you have the privileges to create a folder.
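A minimal sketch (assuming the fastai 0.7 API shown in the traceback, where tmp_name and models_name are forwarded to Learner.__init__ and absolute paths are used as-is; the directory names here are hypothetical):
# Point fastai's scratch directories at a location you can write to.
learn = ConvLearner.pretrained(
    arch, data, precompute=True,
    tmp_name='/home/ubuntu/fastai_scratch/tmp',
    models_name='/home/ubuntu/fastai_scratch/models',
)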

Python2: the meaning of '!../'

Hi, I am studying Caffe with this tutorial (http://nbviewer.jupyter.org/github/BVLC/caffe/blob/tutorial/examples/00-caffe-intro.ipynb).
I don't know the meaning of '!../' in code like the following:
import os
if os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
    print 'CaffeNet found.'
else:
    print 'Downloading pre-trained CaffeNet model...'
    !../scripts/download_model_binary.py ../models/bvlc_reference_caffenet

# load ImageNet labels (for understanding the output)
labels_file = 'synset_words.txt'
if not os.path.exists(labels_file):
    print 'begin'
    !../home2/challege98/caffe/data/ilsvrc12/get_ilsvrc_aux.sh
    print 'finish'
labels = np.loadtxt(labels_file, str, delimiter='\t')
Could you explain it in detail? When I run the code, I get this error:
Downloading pre-trained CaffeNet model...
/bin/sh: 1: ../scripts/download_model_binary.py: not found
begin
/bin/sh: 1: ../home2/challege98/caffe/data/ilsvrc12/get_ilsvrc_aux.sh: not found
finish
---------------------------------------------------------------------------
IOError Traceback (most recent call last)
<ipython-input-19-8534d29d47f5> in <module>()
12 get_ipython().system(u'../home2/challege98/caffe/data/ilsvrc12/get_ilsvrc_aux.sh')
13 print 'finish'
---> 14 labels = np.loadtxt(labels_file, str, delimiter='\t')
15
16
/usr/local/lib/python2.7/dist-packages/numpy/lib/npyio.pyc in loadtxt(fname, dtype, comments, delimiter, converters, skiprows, usecols, unpack, ndmin)
856 fh = iter(bz2.BZ2File(fname))
857 elif sys.version_info[0] == 2:
--> 858 fh = iter(open(fname, 'U'))
859 else:
860 fh = iter(open(fname))
IOError: [Errno 2] No such file or directory: 'synset_words.txt'
The exclamation point is IPython syntax for running the rest of the line as a shell command.
The error you are seeing occurs because the file synset_words.txt does not exist, and it is not being created because the notebook cannot find the script that creates it. Check that this path is correct: ../home2/challege98/caffe/data/ilsvrc12/get_ilsvrc_aux.sh
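As an illustration (assuming an IPython or Jupyter session; the echo command is arbitrary):
# The '!' prefix hands the rest of the line to the system shell.
!echo hello from the shell

# Equivalent call that the notebook generates under the hood,
# as seen in the traceback above:
get_ipython().system(u'echo hello from the shell')
Note that the leading '..' in your paths makes them relative to the notebook's working directory, which is one reason the lookup can fail.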

get the client from pyspark

I want to retrieve a list of files. I saw a post saying that these commands would do the job:
from hdfs import Config
client = Config().get_client('dev')
client.list('/*')
But actually, execution fails:
---------------------------------------------------------------------------
HdfsError Traceback (most recent call last)
<ipython-input-308-ab40dc16879a> in <module>()
----> 1 client = Config().get_client('dev')
/opt/cloudera/extras/anaconda3/lib/python3.5/site-packages/hdfs/config.py in get_client(self, alias)
117 break
118 else:
--> 119 raise HdfsError('Alias %r not found in %r.', alias, self.path)
120 return self._clients[alias]
121
HdfsError: Alias 'dev' not found in '/home/sbenet/.hdfscli.cfg'.
As you can see, it is trying to access the file /home/sbenet/.hdfscli.cfg, which does not exist.
If I want to use this method to retrieve the list of files, I need to fix this .hdfscli.cfg issue, or use another method, maybe with sc.
You have to create a configuration file first, for example:
[global]
default.alias = dev
[dev.alias]
url = http://dev.namenode:port
user = ann
[prod.alias]
url = http://prod.namenode:port
root = /jobs/
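Once the file is saved as ~/.hdfscli.cfg, a minimal usage sketch (assuming the namenode URL and user above are valid for your cluster):
from hdfs import Config

# Resolves the 'dev' alias from ~/.hdfscli.cfg (the default config location).
client = Config().get_client('dev')

# list() takes a directory path rather than a glob, so use '/' for the root.
print(client.list('/'))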

Python 2.7 pickle won't recognize numpy multiarray

I need to load a set of pickled data from a collaborator. The problem is, it seems I need multiarray for this. My code is as follows:
import pickle

f = open('data.p', 'rb')
a = pickle.load(f)
And here is the error message.
ImportError Traceback (most recent call last)
<ipython-input-3-17918c47ae2d> in <module>()
----> 1 a = pk.load(f)
/usr/lib/python2.7/pickle.pyc in load(file)
1382
1383 def load(file):
-> 1384 return Unpickler(file).load()
1385
1386 def loads(str):
/usr/lib/python2.7/pickle.pyc in load(self)
862 while 1:
863 key = read(1)
--> 864 dispatch[key](self)
865 except _Stop, stopinst:
866 return stopinst.value
/usr/lib/python2.7/pickle.pyc in load_global(self)
1094 module = self.readline()[:-1]
1095 name = self.readline()[:-1]
-> 1096 klass = self.find_class(module, name)
1097 self.append(klass)
1098 dispatch[GLOBAL] = load_global
/usr/lib/python2.7/pickle.pyc in find_class(self, module, name)
1128 def find_class(self, module, name):
1129 # Subclasses may override this
-> 1130 __import__(module)
1131 mod = sys.modules[module]
1132 klass = getattr(mod, name)
ImportError: No module named multiarray
I thought it was a problem with the compiled numpy on my computer, so I uninstalled the numpy package from the Arch Linux repo and reinstalled numpy with
sudo -H pip2 install numpy
Yet the problem persists. I have checked the folder $PACKAGE-SITE/numpy/core, and multiarray.so is in it. I have no idea why pickle can't load the module.
How can I solve the problem? What else do I need to do?
PS1. I am using Arch Linux and have tried every version of Python 2.7 since last October. None of them works.
PS2. Since the problem is in the loading step, I suspect it comes from internal conflicts within Python rather than from the data file.
Thanks to @MikeMcKems, the problem is now solved.
The issue is caused by the different special symbols used by MS Windows and Linux (e.g., the end-of-line symbol). My collaborator was using a Windows machine and saved the data with
pickle.dump(obj, open('filename', 'w'))
The data was saved in plain text with a lot of special symbols in it, and when I loaded the data on my Linux machine the symbols were misinterpreted, causing the problem.
The easiest way to solve it is to find a Windows machine and load the data with
a = pickle.load(open('filename_in', 'r'))
Then write it out in binary form:
pickle.dump(a, open('filename_out', 'wb'))
Since binary data is universally recognized as long as you use pickle to read it, the file filename_out is easily readable by Python on Linux.
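Putting the re-save step together, a minimal sketch (run on the machine that wrote the original file; filename_in and filename_out are placeholders as above):
import pickle

# Read the pickle in text mode, matching how it was originally written.
with open('filename_in', 'r') as f_in:
    obj = pickle.load(f_in)

# Re-save in binary mode, which is portable across operating systems.
# The protocol argument is optional but gives a compact binary encoding.
with open('filename_out', 'wb') as f_out:
    pickle.dump(obj, f_out, pickle.HIGHEST_PROTOCOL)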

Issue starting out with xlwings - AttributeError: Excel.Application.Workbooks

I was trying to use the package xlwings and ran into a simple error right from the start. I was able to run the example files they provided here without any major issues (except for multiple Excel workbooks opening up upon running the code), but as soon as I tried to execute code via IPython I got the error AttributeError: Excel.Application.Workbooks. Specifically I ran:
from xlwings import Workbook, Sheet, Range, Chart
wb = Workbook()
Range('A1').value = 'Foo 1'
and got
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-7-7436ba97d05d> in <module>()
1 from xlwings import Workbook, Sheet, Range, Chart
----> 2 wb = Workbook()
3 Range('A1').value = 'Foo 1'
PATH\xlwings\main.pyc in __init__(self, fullname, xl_workbook, app_visible)
139 else:
140 # Open Excel if necessary and create a new workbook
--> 141 self.xl_app, self.xl_workbook = xlplatform.new_workbook()
142
143 self.name = xlplatform.get_workbook_name(self.xl_workbook)
PATH\xlwings\_xlwindows.pyc in new_workbook()
103 def new_workbook():
104 xl_app = _get_latest_app()
--> 105 xl_workbook = xl_app.Workbooks.Add()
106 return xl_app, xl_workbook
107
PATH\win32com\client\dynamic.pyc in __getattr__(self, attr)
520
521 # no where else to look.
--> 522 raise AttributeError("%s.%s" % (self._username_, attr))
523
524 def __setattr__(self, attr, value):
AttributeError: Excel.Application.Workbooks
I noticed the examples have an .xlsm file already present in the folder with the Python code. Does the Python code only ever work if it's in the same location as an existing Excel file? Does this mean it can't create Excel files automatically? Apologies if this is basic.