I want to try using a persistent solver for an algorithm that iteratively adds new constraints to the problem, and I want to avoid completely rebuilding the file given to the solver before each iteration.
Before using persistent solver as described on https://pyomo.readthedocs.io/en/stable/solvers/persistent_solvers.html, I used a ConstraintList object to iteratively add my new constraints without having to name them individually. I thought this was a very elegant solution and I want to see if there is a way to notify the persistent solver when a new constraint is added to the ConstraintList.
In the docs, it is written that
m.c2 = pe.Constraint(expr=m.y >= m.x)
opt.add_constraint(m.c2)
where m.c2 is a constraint to be added to the model with the persistent solver. What would be the equivalent line to notify the persistent solver that the ConstraintList has changed, once a constraint has been added to it?
Here is how you create your constraint list:
m.Cut_Defn = pyomo.ConstraintList()
And then you can add constraints in your constraint list:
m.Cut_Defn.add(some_number >= your_variable + some_other_number)
If you solve before the .add(), you will get a different solution than if you solve after it. In other words, the new constraints are created on the fly, but you have to re-solve your model if you want them to take effect in the optimization.
Given a pyscipopt.Model and its Solution, how to pass it to another model as a primal heuristic?
I'm currently writing the solution to a file via writeSol(), and then calling readSolFile() and addSol(). There should probably be a cleaner way.
This depends a bit on the structure of your two models. If they have the same variables in the same order (which is likely from what you wrote), then you can simply create a new solution in your model and copy all the values, i.e.:
# Copy a solution from othermodel into model (same variables, same order).
# oldsol is the Solution object from othermodel.
variables = othermodel.getVars()
newvariables = model.getVars()
nvars = othermodel.getNVars()
newsol = model.createSol()
for i in range(nvars):
    model.setSolVal(newsol, newvariables[i], othermodel.getSolVal(oldsol, variables[i]))
model.trySol(newsol)
Let me know if this works / doesn't work.
I am creating a model in Pyomo and I would like to create a binary variable x(i,j) representing links between nodes i and nodes j.
The problem is that not all nodes i are connected to j. Given an already known list of existing links (i,j), I would like to introduce a condition of existence when defining such a variable.
I was wondering if it is possible to initialize the variable from the list or if it is possible to add x[i,j] with a for loop based on an if statement.
origin_nodes = ['A', 'B']
dest_nodes = [1, 2, 3]
list_of_links = [('A', 1), ('A', 2), ('A', 3), ('B', 2)]
model.I = Set(initialize=origin_nodes)
model.J = Set(initialize=dest_nodes)
model.X = Var(model.I, model.J, within=Binary)
I'm not sure I understand the question, but see if this documentation is helpful:
https://pyomo.readthedocs.io/en/stable/pyomo_modeling_components/Sets.html#sparse-index-sets
I'm trying to add a DynamoDBVersionAttribute to incorporate optimistic locking when accessing/updating items in a DynamoDB table. However, I'm unable to figure out how exactly to add the version attribute.
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBMapper.OptimisticLocking.html seems to state that using it as an annotation in the class that creates the table is the way to go. However, our codebase is creating new tables in a format similar to this:
AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard().build();
DynamoDB dynamoDB = new DynamoDB(client);

List<AttributeDefinition> attributeDefinitions = new ArrayList<AttributeDefinition>();
attributeDefinitions.add(new AttributeDefinition().withAttributeName("Id").withAttributeType("N"));

List<KeySchemaElement> keySchema = new ArrayList<KeySchemaElement>();
keySchema.add(new KeySchemaElement().withAttributeName("Id").withKeyType(KeyType.HASH));
CreateTableRequest request = new CreateTableRequest()
.withTableName(tableName)
.withKeySchema(keySchema)
.withAttributeDefinitions(attributeDefinitions)
.withProvisionedThroughput(new ProvisionedThroughput()
.withReadCapacityUnits(5L)
.withWriteCapacityUnits(6L));
Table table = dynamoDB.createTable(request);
I'm not able to find out how to add the VersionAttribute through the Java code as described above. It's not an attribute definition, so I'm unsure where it goes. Any guidance as to where I can add this VersionAttribute in the CreateTableRequest?
As far as I'm aware, the @DynamoDBVersionAttribute annotation for optimistic locking is only available for tables modeled specifically for DynamoDBMapper queries. Using DynamoDBMapper is not a terrible approach, since it effectively gives you an ORM for CRUD operations on DynamoDB items.
But if your existing codebase can't make use of it, your next best bet is probably to use conditional writes to increment a version number if it's equal to what you expect it to be (i.e. roll your own optimistic locking). Unfortunately, you would need to add the increment and the condition to every write you want to be optimistically locked.
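The conditional-write idea is, at its core, a compare-and-swap on a version attribute. A library-free Python sketch of the pattern (with DynamoDB, the version check would be a ConditionExpression evaluated server-side rather than this in-process check; the table, key, and field names are made up):

```python
class VersionConflict(Exception):
    """Raised when the stored version no longer matches the expected one."""

def conditional_update(table, key, new_fields, expected_version):
    """Write new_fields only if the item's version still equals
    expected_version, then bump the version. The dict `table` stands
    in for the real store; DynamoDB would enforce the same check with
    a ConditionExpression and reject the write on mismatch."""
    item = table[key]
    if item["version"] != expected_version:
        raise VersionConflict(
            f"expected version {expected_version}, found {item['version']}"
        )
    item.update(new_fields)
    item["version"] = expected_version + 1

table = {"42": {"Id": 42, "balance": 100, "version": 1}}
conditional_update(table, "42", {"balance": 90}, expected_version=1)
# A second writer still holding version 1 would now get VersionConflict
# and has to re-read the item before retrying.
```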
Your code just creates a table, but then in order to use DynamoDBMapper to access that table, you need to create a class that represents it. For example if your table is called Users, you should create a class called Users, and use annotations to link it to the table.
You can keep your table creation code, but you need to create the DynamoDBMapper class. You can then do all of your loading, saving and querying using the DynamoDBMapper class.
When you have created the class, just give it a field called version and put the annotation on it; DynamoDBMapper will take care of the rest.
I want to create an object, save it to the DB, then check if there is another row in the DB with the same token and execution_time=0. If there is, I want to delete the object I created and restart the process.
transfer = Transfer(token=generateToken(size=9))
transfer.save()

while len(Transfer.objects.filter(token=transfer.token, execution_time=0)) != 1:
    transfer.delete()
    transfer = Transfer(token=generateToken(size=9))
    transfer.save()
Do I need to commit the transaction between loop iterations? For example, by calling commit() at the end of every iteration?
while len(Transfer.objects.filter(token=transfer.token, execution_time=0)) != 1:
    transfer.delete()
    transfer = Transfer(token=generateToken(size=9))
    transfer.save()
    commit()

@transaction.commit_manually
def commit():
    transaction.commit()
From what you've described I don't think you need to use transactions. You're basically recreating a transaction rollback manually with your code.
I think the best way to handle this would be to have a database constraint enforce the issue. Is it the case that token and execution_time should be unique together? In that case you can define the constraint in Django with unique_together. If the constraint is that token should be unique whenever execution_time is 0, some databases will let you define a constraint like that as well.
If the constraint were in the database you could just do a get_or_create() in a loop until the Transfer was created.
If you can't define the constraint in the database for whatever reason then I think your version would work. (One improvement would be to use .count() instead of len.)
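With the constraint enforced by the database, the loop reduces to retrying get_or_create() until created is True. A library-free sketch of that shape (the dict store and both helpers below are stand-ins for illustration, not Django APIs; in Django you would call Transfer.objects.get_or_create(token=..., execution_time=0) directly):

```python
import secrets

def generate_token(size=9):
    # Stand-in for the question's generateToken(size=9).
    return secrets.token_hex(size)

def get_or_create(store, token):
    """Minimal stand-in for Django's Model.objects.get_or_create():
    returns (row, created). The dict `store` plays the role of a table
    with a uniqueness constraint on `token`."""
    if token in store:
        return store[token], False
    row = {"token": token, "execution_time": 0}
    store[token] = row
    return row, True

store = {}
created = False
while not created:
    # Keep drawing fresh tokens until one is actually inserted.
    transfer, created = get_or_create(store, generate_token())
```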
I want to create an object, save it to the DB, then check if there is another row in the DB with the same token and execution_time=0. If there is, I want to delete the object created and restart the process.
There are a few ways you can approach this, depending on what your end goal is:
Do you want to ensure that no other record is written while you are writing yours (to prevent duplicates)? If so, you need a lock on your table, and to get one you need to run the operation in an atomic transaction with @transaction.atomic (new in Django 1.6).
If you want to make sure that no duplicate records are created given a combination of fields, you need to enforce this at the database level with unique_together
I believe combining the above two will solve your problem; however, if you want a more brute-force approach, you can override the save() method for your object and raise an appropriate exception when a record that violates your constraints is about to be created (or updated).
In your view, you would then catch this exception and then take the appropriate action.
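For reference, the unique_together option mentioned above sits on the model's Meta class. A hypothetical models.py fragment (the field types are guesses from the question, not taken from it):

```python
from django.db import models

class Transfer(models.Model):
    token = models.CharField(max_length=18)
    execution_time = models.IntegerField(default=0)

    class Meta:
        # Reject a second row with the same (token, execution_time) pair;
        # the database raises an IntegrityError on violation.
        unique_together = ("token", "execution_time")
```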
Looking at the Django source, I see that on assignment to a ManyToManyField, all existing values are removed before the new values are added:
https://github.com/django/django/blob/master/django/db/models/fields/related.py#L776
That naive algorithm will introduce tremendous database churn when my application runs—it updates hundreds of thousands of such relationships at a time.
Since, in my case, most of these updates will be noops (i.e., the current and new values will be identical), I could easily reduce churn by updating the M2M field with an algorithm where I first check which objects need to be added, and then check which need to be removed.
This seems like such a common pattern that I'm wondering if a reusable function to do this already exists?
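The intended diff-then-update step can be sketched as plain set arithmetic (in Django, the two resulting sets would feed related_manager.add(*to_add) and related_manager.remove(*to_remove), issuing only the inserts and deletes that are actually needed):

```python
def m2m_diff(current_ids, new_ids):
    """Compute the minimal add/remove sets for an M2M reassignment.
    current_ids: primary keys currently related; new_ids: the desired
    final set. A no-op assignment yields two empty sets, so no
    database writes are issued at all."""
    current = set(current_ids)
    new = set(new_ids)
    to_add = new - current
    to_remove = current - new
    return to_add, to_remove

to_add, to_remove = m2m_diff([1, 2, 3], [2, 3, 4])
# to_add == {4}, to_remove == {1}: two writes instead of
# deleting three rows and inserting three.
```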