How to access objective function value in pyomo? - pyomo

I am trying to output the objective value from my Pyomo model. I was able to access the variable values, but I cannot access the objective function value. My code is:
instance = model.create_instance(data)
opt = SolverFactory('cplex')
results = opt.solve(instance)
instance.solutions.store_to(results)
results.write()
# instance.display()
# output the solution
var_val = []
for v in instance.component_data_objects(Var):
    var_val.append(int(v.value))
obj_val = value(instance.obj)
The last line gives this error:
obj_val = value(instance.obj)
NameError: name 'value' is not defined
But I can clearly see the value in the output of results.write():
Message: None
Objective:
  obj:
    Value: 104728.80233047833
Variable:
  x[0,1]:
    Value: 1569
  x[1,0]:
    Value: 1569
  x[1,1]:
    Value: 206
  x[2,2]:
    Value: 230
  x[2,3]:
    Value: 213
  x[3,2]:
    Value: 213

How are you importing Pyomo? If you're using from pyomo.environ import *, the value function is already included. If you're importing each thing you use individually, then you just need to make sure you also import the value function: from pyomo.environ import value
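For instance, a minimal sketch, assuming the objective component is named obj as in the output above:

from pyomo.environ import value

obj_val = value(instance.obj)  # evaluates the objective expression
print(obj_val)                 # e.g. 104728.80233047833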


Invoke endpoint error - detectron2 on AWS Sagemaker: ValueError: Type [application/x-npy] not support this type yet

I have been following this guide for implementing a Detectron2 model on SageMaker.
It all looks good, both on the training and the batch transform side.
However, I tried to tweak the code a bit to create an endpoint that can be invoked by sending a payload, and I am having some trouble with it.
At the end of this notebook, after creating the SageMaker model object:
model = PyTorchModel(
    name="d2-sku110k-model",
    model_data=training_job_artifact,
    role=role,
    sagemaker_session=sm_session,
    entry_point="predict_sku110k.py",
    source_dir="container_serving",
    image_uri=serve_image_uri,
    framework_version="1.6.0",
    code_location=f"s3://{bucket}/{prefix_code}",
)
I added the following code:
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')
And I can see that the model has been successfully deployed.
However, when I try to predict an image with:
predictor.predict(input)
I get the following error:
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from primary with message "Type [application/x-npy] not support this type yet
Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 126, in transform
    result = self._transform_fn(self._model, input_data, content_type, accept)
  File "/opt/conda/lib/python3.6/site-packages/sagemaker_inference/transformer.py", line 215, in _default_transform_fn
    data = self._input_fn(input_data, content_type)
  File "/opt/ml/model/code/predict_sku110k.py", line 98, in input_fn
    raise ValueError(err_msg)
ValueError: Type [application/x-npy] not support this type yet
I tried a bunch of different input types: an image byte-encoded (created with cv2.imencode('.jpg', cv_img)[1].tobytes()), a numpy array, a BytesIO object (created with the io module), and a dictionary of the form {'input': image} where image is any of the previous (this format was used by a TensorFlow endpoint I created some time ago).
As I think it might be relevant, I also paste here the inference script used as the entry point:
"""Code used for sagemaker batch transform jobs"""
from typing import BinaryIO, Mapping
import json
import logging
import sys
from pathlib import Path
import numpy as np
import cv2
import torch
from detectron2.engine import DefaultPredictor
from detectron2.config import CfgNode
##############
# Macros
##############
LOGGER = logging.Logger("InferenceScript", level=logging.INFO)
HANDLER = logging.StreamHandler(sys.stdout)
HANDLER.setFormatter(logging.Formatter("%(levelname)s | %(name)s | %(message)s"))
LOGGER.addHandler(HANDLER)
##########
# Deploy
##########
def _load_from_bytearray(request_body: BinaryIO) -> np.ndarray:
    npimg = np.frombuffer(request_body, np.uint8)
    return cv2.imdecode(npimg, cv2.IMREAD_COLOR)
def model_fn(model_dir: str) -> DefaultPredictor:
    r"""Load trained model

    Parameters
    ----------
    model_dir : str
        S3 location of the model directory

    Returns
    -------
    DefaultPredictor
        PyTorch model created by using Detectron2 API
    """
    path_cfg, path_model = None, None
    for p_file in Path(model_dir).iterdir():
        if p_file.suffix == ".json":
            path_cfg = p_file
        if p_file.suffix == ".pth":
            path_model = p_file
    LOGGER.info(f"Using configuration specified in {path_cfg}")
    LOGGER.info(f"Using model saved at {path_model}")
    if path_model is None:
        err_msg = "Missing model PTH file"
        LOGGER.error(err_msg)
        raise RuntimeError(err_msg)
    if path_cfg is None:
        err_msg = "Missing configuration JSON file"
        LOGGER.error(err_msg)
        raise RuntimeError(err_msg)
    with open(str(path_cfg)) as fid:
        cfg = CfgNode(json.load(fid))
    cfg.MODEL.WEIGHTS = str(path_model)
    cfg.MODEL.DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
    return DefaultPredictor(cfg)
def input_fn(request_body: BinaryIO, request_content_type: str) -> np.ndarray:
    r"""Parse input data

    Parameters
    ----------
    request_body : BinaryIO
        encoded input image
    request_content_type : str
        type of content

    Returns
    -------
    np.ndarray
        input image

    Raises
    ------
    ValueError
        ValueError if the content type is not `application/x-image`
    """
    if request_content_type == "application/x-image":
        np_image = _load_from_bytearray(request_body)
    else:
        err_msg = f"Type [{request_content_type}] not support this type yet"
        LOGGER.error(err_msg)
        raise ValueError(err_msg)
    return np_image
def predict_fn(input_object: np.ndarray, predictor: DefaultPredictor) -> Mapping:
    r"""Run Detectron2 prediction

    Parameters
    ----------
    input_object : np.ndarray
        input image
    predictor : DefaultPredictor
        Detectron2 default predictor (see Detectron2 documentation for details)

    Returns
    -------
    Mapping
        a dictionary that contains: the image shape (`image_height`, `image_width`), the predicted
        bounding boxes in format x1y1x2y2 (`pred_boxes`), the confidence scores (`scores`) and the
        labels associated with the bounding boxes (`pred_classes`)
    """
    LOGGER.info(f"Prediction on image of shape {input_object.shape}")
    outputs = predictor(input_object)
    fmt_out = {
        "image_height": input_object.shape[0],
        "image_width": input_object.shape[1],
        "pred_boxes": outputs["instances"].pred_boxes.tensor.tolist(),
        "scores": outputs["instances"].scores.tolist(),
        "pred_classes": outputs["instances"].pred_classes.tolist(),
    }
    LOGGER.info(f"Number of detected boxes: {len(fmt_out['pred_boxes'])}")
    return fmt_out
# pylint: disable=unused-argument
def output_fn(predictions, response_content_type):
    r"""Serialize the prediction result into the desired response content type"""
    return json.dumps(predictions)
Can anyone point out the correct format for invoking the model (or how to tweak the code to use the endpoint)? I am thinking of changing request_content_type to 'application/json', but I am not sure it will help much.
Edit: I tried a solution inspired by this SO thread but it did not work for my case.
It's been a while since you asked this so I hope you found a solution already, but for people seeing this in the future ...
The error appears because you are sending the request with the predictor's default content type (you neither specified a content type in the request nor configured a serializer), but your code is written to respond only to requests that arrive with the content type "application/x-image".
For the SageMaker PyTorch predictor the default content type is "application/x-npy", which is exactly the type named in the error message.
You have two options here: either amend your code to handle the default content type, or add a content-type header with the right value when you invoke the endpoint. You can do the latter by changing the predict call as below:
instead of:
predictor.predict(input)
try:
predictor.predict(input, initial_args={"ContentType":"application/x-image"})
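Alternatively, a sketch assuming SageMaker Python SDK v2 and that the payload is raw JPEG bytes: attach a serializer that passes the bytes through unchanged and stamps the content type that input_fn expects:

from sagemaker.serializers import IdentitySerializer

# Send the payload as-is, labelled with the content type the endpoint handles
predictor.serializer = IdentitySerializer(content_type="application/x-image")
payload = cv2.imencode('.jpg', cv_img)[1].tobytes()  # cv_img as in the question
predictor.predict(payload)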

How is the None keyword argument converted into an int datatype?

Disclaimer: This is not my code. It originated from graphics.py created by Dr. John Zelle.
Where is rate declared as a number? I understand that it is a keyword argument defaulting to None, but how is pauseLength = 1/rate-(now-_update_lasttime) valid? To my knowledge, this is saying that 1 is divided by None.
import time, os, sys

try:  # import as appropriate for 2.x vs. 3.x
    import tkinter as tk
except:
    import Tkinter as tk

# global variables and functions
_root = tk.Tk()
_root.withdraw()

_update_lasttime = time.time()

def update(rate=None):
    global _update_lasttime
    if rate:
        now = time.time()
        pauseLength = 1/rate-(now-_update_lasttime)
        if pauseLength > 0:
            time.sleep(pauseLength)
            _update_lasttime = now + pauseLength
        else:
            _update_lasttime = now
    _root.update()
A simple experiment:
a = None
b = 1
print(b/a)
reveals the following error (which makes complete sense):
TypeError: unsupported operand type(s) for /: 'int' and 'NoneType'
If rate is None, the conditional if rate: is false, so none of the code in the following block (including the problematic division) runs. The only code which runs in that situation is _root.update().
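A quick way to see the guard in action (hypothetical values; note that a rate of 0 is skipped too, because 0 is also falsy):

for rate in (None, 0, 30):
    if rate:
        print(rate, "-> division runs:", 1 / rate)
    else:
        print(rate, "-> division skipped")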

PyYAML shows "ScannerError: mapping values are not allowed here" in my unittest

I am trying to test a number of Python 2.7 classes using unittest.
Here is the exception:
ScannerError: mapping values are not allowed here
  in "<unicode string>", line 3, column 32:
    ... file1_with_path: '../../testdata/concat1.csv'
Here is the example the error message relates to:
class TestConcatTransform(unittest.TestCase):
    def setUp(self):
        filename1 = os.path.dirname(os.path.realpath(__file__)) + '/../../testdata/concat1.pkl'
        self.df1 = pd.read_pickle(filename1)
        filename2 = os.path.dirname(os.path.realpath(__file__)) + '/../../testdata/concat2.pkl'
        self.df2 = pd.read_pickle(filename2)
        self.yamlconfig = u'''
        --- !ConcatTransform
        file1_with_path: '../../testdata/concat1.csv'
        file2_with_path: '../../testdata/concat2.csv'
        skip_header_lines: [0]
        duplicates: ['%allcolumns']
        outtype: 'dataframe'
        client: 'testdata'
        addcolumn: []
        '''
        self.testconcat = yaml.load(self.yamlconfig)
What is the problem?
Something that may be relevant: the directory structure I have is:
app
app/etl
app/tests
The ConcatTransform is in app/etl/concattransform.py and TestConcatTransform is in app/tests. I import ConcatTransform into the TestConcatTransform unittest with this import:
from app.etl import concattransform
How does PyYAML associate that class with the one defined in yamlconfig?
A YAML document can start with a document start marker ---, but that has to be at the beginning of a line, and yours is indented eight positions on the second line of the input. That causes the --- to be interpreted as the beginning of a multi-line plain (i.e. non-quoted) scalar, and within such a scalar you cannot have a : (colon + space). You can only have : in quoted scalars. And if your document does not have a mapping or sequence at the root level, as yours doesn't, the whole document can only consist of a single scalar.
If you want to keep your sources nicely indented like you have now, I recommend you use dedent from textwrap.
The following runs without error:
import ruamel.yaml
from textwrap import dedent

yaml_config = dedent(u'''\
    --- !ConcatTransform
    file1_with_path: '../../testdata/concat1.csv'
    file2_with_path: '../../testdata/concat2.csv'
    skip_header_lines: [0]
    duplicates: ['%allcolumns']
    outtype: 'dataframe'
    client: 'testdata'
    addcolumn: []
    ''')

yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_config)
yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_config)
You should get into the habit of putting a backslash (\) right after your opening triple quotes, so your YAML document starts on the first line of the string. If you do that, the error would have indicated line 2, because the document no longer starts with an empty line.
During loading, the YAML parser encounters the tag !ConcatTransform. A constructor for that tag is probably registered during the import, associating the tag with the ConcatTransform class using PyYAML's add_constructor.
Unfortunately they registered their constructor with the default, non-safe loader, which is not necessary; they could have registered it with the SafeLoader, and thereby not force users to risk the problems of non-controlled input.
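For reference, a sketch of how such a registration could be done safely (the constructor body is hypothetical; the real one lives in concattransform.py):

import yaml
from app.etl import concattransform

def concat_transform_constructor(loader, node):
    # Build a ConcatTransform from the tagged mapping (hypothetical body)
    return concattransform.ConcatTransform(**loader.construct_mapping(node))

# Registering with the SafeLoader keeps yaml.safe_load() usable
yaml.add_constructor(u'!ConcatTransform', concat_transform_constructor,
                     Loader=yaml.SafeLoader)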

graph-tool - AttributeError: 'PropertyDict' object has no attribute 'species'

I have the following code to annotate a graph using property maps:
from graph_tool.all import *
# define graph
g = Graph()
g.set_directed(True)
species = g.new_vertex_property("string")
species_dict = {}
reaction_dict = {}
# add species and reactions
s1 = g.add_vertex()
species[s1] = 'limonene'
species_dict[g.vertex_index[s1]] = 'limonene'
g.vertex_properties["species"] = species
g.vp.species[s1]
When I run this I obtain the following error message:
File "/home/pmj27/projects/NOC/exergy/make_graph.py", line 45, in <module>
g.vp.species[s1]
AttributeError: 'PropertyDict' object has no attribute 'species'
Why is this? If I type g.vp into my IPython console I get {'species': <PropertyMap object with key type 'Vertex' and value type 'string', for Graph 0x7f285d90ea10, at 0x7f285d90ef90>} as the output, so there clearly is a property map.
The access to property maps via attributes (as g.vp.species[s1] in your example) is only available in more recent versions of graph-tool (currently 2.11, as of Nov 2015). In the version you are using (2.2.42), you must use the dictionary interface: g.vp["species"][s1].
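In other words, a minimal sketch continuing the example above:

# Dictionary interface: works on all graph-tool versions
print(g.vp["species"][s1])   # -> 'limonene'

# Attribute interface: requires graph-tool >= 2.11
print(g.vp.species[s1])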

django get_or_create return error: 'tuple' object has no attribute

I am new to Django and I am trying to use the get_or_create model method, but I get an error even though I have the attribute in my model.
AttributeError at /professor/adicionar-compromisso
'tuple' object has no attribute 'dias'
Request Method: POST
Request URL: http://localhost:8000/professor/adicionar-compromisso
Django Version: 1.4.1
Exception Type: AttributeError
Exception Value:
'tuple' object has no attribute 'dias'
Exception Location: c:\htdocs\rpv\GerenDisponibilidade\professor\models.py in inserirCompromisso, line 63
Python Executable: C:\Python27\python.exe
Python Version: 2.7.3
Python Path:
['c:\\htdocs\\rpv\\GerenDisponibilidade',
'C:\\Python27\\lib\\site-packages\\distribute-0.6.27-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\pip-1.1-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\sphinx-1.1.3-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\docutils-0.9.1-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\jinja2-2.6-py2.7.egg',
'C:\\Python27\\lib\\site-packages\\pygments-1.5-py2.7.egg',
'C:\\Windows\\system32\\python27.zip',
'C:\\Python27\\DLLs',
'C:\\Python27\\lib',
'C:\\Python27\\lib\\plat-win',
'C:\\Python27\\lib\\lib-tk',
'C:\\Python27',
'C:\\Python27\\lib\\site-packages',
'C:\\Python27\\lib\\site-packages\\setuptools-0.6c11-py2.7.egg-info']
Server time: Seg, 3 Set 2012 17:57:17 -0300
Model
class DiaSemana(models.Model):
    DIAS_CHOICES = (
        ("Seg", "Segunda-Feira"),
        ("Ter", "Terça-Feira"),
        ("Qua", "Quarta-Feira"),
        ("Qui", "Quinta-Feira"),
        ("Sex", "Sexta-Feira"),
        ("Sab", "Sábado"),
        ("Dom", "Domingo"),
    )
    dias = models.CharField(max_length=20, choices=DIAS_CHOICES)
Here I am trying to check whether the value already exists, and otherwise create and save a new one:
for diaSemana in diaSemanas:
    d = DiaSemana.objects.get_or_create(dias=diaSemana)
    d.dias = diaSemana;
    d.save()
    c.save()
    c.diaSemana.add(d);
What's wrong?
get_or_create does not just return the object:
Returns a tuple of (object, created), where object is the retrieved or created object and created is a boolean specifying whether a new object was created.
In your case d has been assigned this tuple instead of the object you expected, so you get the attribute error. You can fix your code by changing it to:
d, created = DiaSemana.objects.get_or_create(dias=diaSemana)
The following two lines look unnecessary to me. The get_or_create call above ensures that d.dias=diaSemana, so there's no need to assign it again. There's probably no need to call save either.
d.dias = diaSemana;
d.save()
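Putting it all together, the loop from the question reduces to something like this sketch (c and diaSemanas as defined in the question):

for diaSemana in diaSemanas:
    # get_or_create returns (object, created); keep only the object
    d, created = DiaSemana.objects.get_or_create(dias=diaSemana)
    c.diaSemana.add(d)
c.save()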
Instead of this:
d = DiaSemana.objects.get_or_create(dias=diaSemana)
do:
d = DiaSemana.objects.get_or_create(dias=diaSemana)[0]
As @Alasdair said, the first item in the tuple is the object.
The documentation clearly says that get_or_create returns a tuple (object, created), and this is exactly the error you are seeing.
https://docs.djangoproject.com/en/dev/ref/models/querysets/#get-or-create