I want to set initial primal and dual values in a program's variables. Is there a specific way to do this? I can see there is an initialize option on the Var object, but I'm not sure how to use it in this manner.
If you want to set the value of a variable when you declare it, you can use the initialize keyword. E.g.,
model.x = Var(initialize=1.0)
Alternatively, you can set the .value attribute on a variable anytime before the solve. If you are starting with an AbstractModel, be sure to only do this on the instance returned by the create_instance method. Here is an example using a ConcreteModel:
model = ConcreteModel()
model.x = Var()
model.X = Var([1,2,3])
model.x.value = 5.0
model.X[1].value = 1.0
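For the AbstractModel case mentioned above, a minimal sketch (the data file name is a placeholder):
# set values on the instance returned by create_instance(),
# not on the abstract model itself
instance = model.create_instance('mydata.dat')
instance.x.value = 5.0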
The NL file interface will always include the current value of all model variables in the solver input file. For other interfaces (e.g., the LP file interface), adding the keyword warmstart=True to the solve method will create a warmstart file that includes values of any binary or integer variables for a MIP warmstart.
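For example, a hedged sketch of a warmstarted solve (the choice of solver here is just an assumption):
from pyomo.environ import SolverFactory

opt = SolverFactory('cbc')  # any MIP solver on a warmstart-capable interface
results = opt.solve(model, warmstart=True)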
To set a dual solution, you must declare a Suffix on your model with the name dual. Note that the only interface that currently supports exporting suffix information is the NL file interface (solvers that work with AMPL). However, most interfaces support importing suffix information from the solver (dual especially). Setting the dual value of a particular constraint might look like:
model = ConcreteModel()
model.dual = Suffix(direction=Suffix.IMPORT_EXPORT)
model.c = Constraint(...)
model.dual[model.c] = 1.0
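After a solve, duals imported from the solver can be read back through the same suffix; a minimal sketch, assuming an NL-interface solver such as ipopt:
from pyomo.environ import SolverFactory, Constraint

SolverFactory('ipopt').solve(model)
for c in model.component_objects(Constraint, active=True):
    for index in c:
        # the suffix behaves like a dict keyed by the constraint objects
        print(c[index].name, model.dual[c[index]])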
More information about the Suffix component can be found in the online documentation for Pyomo.
In my Django application, I am using bulk_create(). For one of the fields in the target model I have assigned a set of validators to restrict the allowed value to uppercase letters and a fixed length of 3, as shown below:
class Plant(models.Model):
    plant = models.CharField(primary_key=True, max_length=4, ...
    plant_name = models.CharField(max_length=75, ...
    plant_short_name = models.CharField(max_length=3, validators=[...
    # rest of the fields ...
I am restricting the field plant_short_name to something like CHT for, say, Plant Charlotte.
Using the source file (.csv) I am able to successfully create new instances with bulk_create; however, I find that the data get saved even when the value of plant_short_name violates the validators.
For example, if I use the source as:
plant,plant_name,plant_short_name
9999,XYZ Plant,XY
the new instance still gets created although the length of the (string) value of plant_short_name is only 2 (instead of 3 as defined in the validators).
If I am to use an online create function (say, Django CreateView), the validators work as expected.
How do I control / restrict the creation of model instances when a field value of incorrect length is used in the source file?
bulk_create():
This method inserts the provided list of objects into the database in an efficient manner (generally only 1 query, no matter how many objects there are). It also does not call save() on each of the instances and does not send any pre/post_save signals.
By "efficient manner" it means there is no validation. You can explore the function code in django/db/models/query.py inside your environment.
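A hedged workaround sketch: run the validators yourself with full_clean() before handing the list to bulk_create() (rows, standing in for your parsed CSV data, is hypothetical):
from django.core.exceptions import ValidationError

plants = [Plant(**row) for row in rows]  # 'rows' is your parsed CSV data
valid = []
for p in plants:
    try:
        p.full_clean()  # runs the field validators that bulk_create() skips
        valid.append(p)
    except ValidationError as e:
        print(p.plant, e.message_dict)  # or log / collect the bad rows
Plant.objects.bulk_create(valid)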
I have a scenario where I want the greatest value together with its field name. I can get the greatest value using the Greatest db function which Django provides, but I am not able to get its field name. For example:
emps = Employee.objects.annotate(my_max_value=Greatest('date_time_field_1', 'date_time_field_2'))
for e in emps:
    print(e.my_max_value)
Here I will get the value using e.my_max_value, but I am unable to find out which field that value came from.
You have to annotate a Conditional Expression using Case() and When().
from django.db.models import Case, CharField, F, Value, When

emps = Employee.objects.annotate(
    greatest_field=Case(
        # then= needs Value(): a bare string would be resolved as a field
        # reference, and default="equal" would raise a FieldError
        When(date_time_field_1__gt=F("date_time_field_2"),
             then=Value("date_time_field_1")),
        When(date_time_field_2__gt=F("date_time_field_1"),
             then=Value("date_time_field_2")),
        default=Value("equal"),
        output_field=CharField(),
    )
)
for e in emps:
    print(e.greatest_field)
If you want the database query to tell you which of the fields was larger, you'll need to add another annotated column, using case/when logic to return one field name or the other. (See https://docs.djangoproject.com/en/4.0/ref/models/conditional-expressions/#when)
Unless you're really trying to offload work onto the database, it'll be much simpler to do the comparison work in Python.
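A minimal sketch of that Python-side alternative (field names taken from the question):
# loop over the instances and compare the two values in Python
for e in Employee.objects.all():
    if e.date_time_field_1 >= e.date_time_field_2:
        print("date_time_field_1", e.date_time_field_1)
    else:
        print("date_time_field_2", e.date_time_field_2)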
I have the following model in my Django app:
class Revenue(models.Model):
    from_a = models.IntegerField()
    from_b = models.IntegerField()

    def get_total(self):
        return self.from_a + self.from_b
Now I am retrieving data using Revenue.objects.filter(from_a__gt = 10).values('from_a', 'from_b').
From the above queryset I am getting values; now I want to call the get_total function on the objects. I didn't find a way to call that function.
Is there a way to retrieve only the data I need using values and also call member functions of those objects?
Revenue.objects.filter(from_a__gt = 10) should not be the solution if I have hundreds of columns in my model.
Instead of doing this at the model level, you can use an F expression with your query:
from django.db.models import F

Revenue.objects.filter(from_a__gt=10).annotate(
    get_total=F('from_a') + F('from_b')
).values('from_a', 'from_b', 'get_total')
From the above queryset I am getting values; now I want to call the get_total function on the objects. I didn't find a way to call that function.
Well, you obtain a QuerySet (which at that point is not yet evaluated), so a collection of Revenues. You cannot directly call the function on that collection, but you can iterate through the queryset and call the function on the individual objects. We can for example make a list with:
[r.get_total() for r in Revenue.objects.filter(from_a__gt = 10)]
Is there a way to retrieve only the data I need using values and also call member functions of those objects?
Yes, you can use .only(..) on the query to restrict the number of columns that are loaded:
[r.get_total() for r in Revenue.objects.filter(from_a__gt = 10).only('from_a', 'from_b')]
This will construct Revenue objects, but only load the specified columns. In that case we load only from_a and from_b; if you later need other fields, they will be loaded with extra queries.
In case the logic in the member functions is simple, however, you are better off using annotations: these are processed in the database and thus allow filtering. This is not always possible, though: Python can calculate very complicated things that would result in a gigantic equivalent SQL expression. Furthermore, most databases cannot contact web services or file systems, so some functions are fundamentally impossible to translate into an annotation.
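For example, a small sketch of filtering on such an annotation (the threshold of 100 is arbitrary):
from django.db.models import F

# the total is computed in SQL, so the database can filter on it directly
Revenue.objects.annotate(total=F('from_a') + F('from_b')).filter(total__gt=100)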
It depends on what you want to do... if you want get_total to show up as a field, you can make it a property method; it will be a calculated field, so it is not persisted in the database.
Property Method
@property
def get_total(self):
    return self.from_a + self.from_b
Note that .values() only accepts database fields, so 'get_total' cannot be selected there; read the property on the model instances instead (see below).
https://docs.djangoproject.com/en/2.0/topics/db/models/#model-methods
You can access it like a regular attribute, so once you have a model instance you can read it:
revenues = Revenue.objects.filter(from_a__gt=10)  # .values() would return dicts, not instances
for revenue in revenues:
    print(revenue.get_total)
# OR
[r.get_total for r in Revenue.objects.filter(from_a__gt=10)]  # like Willem Van said
Static Method
Using it as a static method (this way isn't the best fit for your need, but it's nice to know), you bind the method to the class, so you don't need an object instance to use it, just the class reference:
class Revenue(...):
    ...

    @staticmethod
    def get_total(valueA, valueB):
        return valueA + valueB
Call it like this:
Revenue.get_total(5, 10)
https://www.programiz.com/python-programming/methods/built-in/staticmethod
Obs.: oddly, I didn't find a good reference for staticmethod in the Django docs.
I have a set of dynamic database tables (Postgres 9.3 with PostGIS) that I am mapping using a Python metaclass:
cls = type(str(tablename), (db.Model,), {'__tablename__':tablename})
where db.Model is the declarative base from the flask-sqlalchemy db object and tablename is a bit of unicode.
The cls is then added to an application wide dictionary current_app.class_references (using Flask's current_app) to avoid attempts to instantiate the class multiple times.
Each table contains a geometry column, wkb_geometry, stored as Well-Known Binary. I want to map these using geoalchemy2, with the final goal of retrieving GeoJSON.
If I was declaring the table a priori, I would use:
class GeoPoly():
    __tablename__ = 'somename'
    wkb_geometry = db.Column(Geometry("POLYGON"))
    # more columns...
Since I am trying to do this dynamically, I need to be able to override the reflection of cls with the known type.
Attempts:
Define the column explicitly, using the reflection override syntax.
cls = type(str(tablename), (db.Model,), {'__tablename__': tablename,
                                         'wkb_geometry': db.Column(Geometry("POLYGON"))})
which returns the following on a fresh restart, i.e. the class has not yet been instantiated:
InvalidRequestError: Table 'tablename' is already defined for this MetaData instance. Specify 'extend_existing=True' to redefine options and columns on an existing Table object
Use mixins with the class defined above (sans tablename):
cls = type(str(tablename), (GeoPoly, db.Model), {'__tablename__':tablename})
Again MetaData issues.
Override the column definition attribute after the class is instantiated:
cls = type(str(tablename), (db.Model,), {'__tablename__':tablename})
current_app.class_references[tablename] = cls
cls.wkb_geometry = db.Column(Geometry("POLYGON"))
Which results in:
InvalidRequestError: Implicitly combining column tablename.wkb_geometry with column tablename.wkb_geometry under attribute 'wkb_geometry'. Please configure one or more attributes for these same-named columns explicitly.
Is it possible to use the metadata construction to support dynamic reflection **and** override a column known to be available on all tables?
I'm not sure if I exactly follow what you're doing, but I've overridden reflected columns in the past inside my own __init__ method on a custom metaclass that inherits from DeclarativeMeta. Any time the new base class is used, it checks for a 'wkb_geometry' column name, and replaces it with (a copy of) the one you created.
import sqlalchemy as sa
from sqlalchemy.ext.declarative import DeclarativeMeta, declarative_base

# db and Geometry come from flask-sqlalchemy and geoalchemy2, as in the question
wkb_geometry = db.Column(Geometry("POLYGON"))

class MyMeta(DeclarativeMeta):
    def __init__(cls, clsname, parents, dct):
        for key, val in dct.items():
            if isinstance(val, sa.Column) and key == 'wkb_geometry':
                dct[key] = wkb_geometry.copy()
        super(MyMeta, cls).__init__(clsname, parents, dct)

MyBase = declarative_base(metaclass=MyMeta)

cls = type(str(tablename), (MyBase,), {'__tablename__': tablename})
This may not exactly work for you, but it's an idea. You probably need to add db.Model to the MyBase tuple, for example.
This is what I use to customize a particular column while relying on autoload for everything else. The code below assumes an existing declarative Base object for a table named my_table. It loads the metadata for all columns but overrides the definition of a column named polygon:
class MyTable(Base):
    __tablename__ = 'my_table'
    __table_args__ = (
        Column('polygon', Geometry("POLYGON")),
        {'autoload': True},
    )
Other arguments to the Table constructor can be provided in the dictionary. Note that the dictionary must appear last in the tuple!
The SQLAlchemy documentation Using a Hybrid Approach with __table__ provides more details and examples.
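A hedged sketch of that hybrid approach, assuming an existing engine and the same my_table (the class name here is made up):
from sqlalchemy import Table, Column

class MyTableHybrid(Base):
    # reflect all columns, but declare 'polygon' explicitly so the
    # reflected type is overridden with the Geometry type
    __table__ = Table('my_table', Base.metadata,
                      Column('polygon', Geometry("POLYGON")),
                      autoload=True, autoload_with=engine)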
I am very new to Django and I'm creating a simple app to track disk usage and show changes over time, etc.:
class Directory(models.Model):
    path = models.CharField(max_length=200)

class Directory_Size(models.Model):
    drivedir = models.ForeignKey(Directory, related_name='sizes')
    measure_date = models.DateTimeField()
    size = models.IntegerField()
Directory_Size stores the size of the directory and the time it was recorded. There will be many of these for each Directory.
How do I select the current size of the directory? I need the newest Directory_Size for each Directory.
How would I select the top 10 directories based on size? This would be a simple order by size with a limit of 10; can it be done by chaining order_by and a slice onto the query above?
Should I change the models to make this type of thing easier?
I'm assuming this is simple and I don't know how because of my lack of Django knowledge.
This isn't related to your questions, but Django naming standards would tell you to name the model DirectorySize, not Directory_Size. You use either CamelCase or lowercase with underscores, not both (it's an xor). In general, classes (and therefore, models) are named using CamelCase. Function definitions and variables are lowercase with underscores.
source: https://docs.djangoproject.com/en/dev/internals/contributing/writing-code/coding-style/
1) Now my question is: how do I select the current size of the directory? I need the newest Directory_Size for each directory.
d = Directory.objects.get( ...
d_size = d.sizes.order_by('-measure_date')[0].size
2) select the top 10 directories based on size: I think you need a custom raw query.
for p in Directory.objects.raw(
        'SELECT *, (select ...) as s FROM myapp_directory ORDER BY s DESC LIMIT 10'):
    print(p.s)
where (select ...) is a subquery to get the actual size; you could also do a join...
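If you would rather stay in the ORM, a hedged alternative using an aggregate (note that Max gives the largest measurement ever recorded, which matches the "current" size only if directories never shrink; otherwise you need a subquery for the latest row):
from django.db.models import Max

# related_name='sizes' on Directory_Size makes 'sizes__size' available
top10 = (Directory.objects
         .annotate(biggest=Max('sizes__size'))
         .order_by('-biggest')[:10])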
sizes will return a manager on which you can perform filtering operations. So this:
last_10_sizes = the_directory.sizes.order_by('-measure_date')[:10]
will return the latest 10 sizes.
Directory_Size.objects.all().order_by('-size')[:10]
However, that is pretty crude and runs over all objects.
You can filter on a measure date as well and give a range, something like:
import datetime
start_date = datetime.datetime(<values_here>)
end_date = datetime.datetime(<values_here>)
Directory_Size.objects.filter(measure_date__gte=start_date, measure_date__lte=end_date).order_by('-size')[:10]
Do you need historical data? If not, why not update the same directory row instead, with a modified date defaulting to datetime.datetime.utcnow()?
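A minimal sketch of that non-historical alternative (the model and field names are hypothetical):
import datetime

class DirectoryCurrentSize(models.Model):
    # one row per directory, overwritten on each measurement
    drivedir = models.OneToOneField(Directory, related_name='current_size')
    modified = models.DateTimeField(default=datetime.datetime.utcnow)
    size = models.IntegerField()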