We had a column type for an enum called enumFooType, which we had registered with \Doctrine\DBAL\Types\Type::addType().
When running vendor/bin/doctrine-module migrations:diff to generate the migration that would delete said column, an error was thrown:
[Doctrine\DBAL\DBALException]
Unknown column type "enumFooType" requested. Any Doctrine type that you use has to be registered with \Doctrine\DBAL\Types\Type::addType().
You can get a list of all the known types with \Doctrine\DBAL\Types\Type::getTypesMap().
If this error occurs during database introspection then you might have forgot to register all database types for a Doctrine Type.
Use AbstractPlatform#registerDoctrineTypeMapping() or have your custom types implement Type#getMappedDatabaseTypes().
If the type name is empty you might have a problem with the cache or forgot some mapping information.
I'm guessing the error was thrown because the database has a foo_type column whose comment marks it with (DC2Type:enumFooType).
What is the correct way of handling these kinds of deletions? My first thought would be to generate a blank migration using vendor/bin/doctrine-module migrations:generate and write the query manually, but I'd prefer a more automated approach, ideally without writing anything by hand.
TL;DR:
The class definition for the DBAL type enumFooType should exist before running the doctrine commands (now that I have written this line, it feels kind of obvious, like "duh!").
Long answer:
After a couple of rollbacks and some trial and error, I devised the following procedure for this kind of operation:
Delete the property of type enumFooType from the entity class.
Create the migration (up to this point, the EnumFooType file still exists).
Delete the EnumFooType class that contains the definition of this DBAL type.
It has to be done in this order because, if you delete the type first, Doctrine won't load while that file is missing, and you end up with the exception posted in the original question.
Moreover, once you have created the migration and deleted the type, if you ever need to roll back that change you have to:
Restore the previous commit, so that EnumFooType exists and the property of type enumFooType is defined in the entity class.
Run the migration command to roll back.
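In command form, the same procedure looks roughly like this; the rollback argument is a placeholder, since the exact syntax depends on your Doctrine Migrations version:
# 1. Remove the enumFooType property from the entity class, then generate
#    the migration while the EnumFooType class still exists:
vendor/bin/doctrine-module migrations:diff
# 2. Run the generated migration and only then delete the EnumFooType class:
vendor/bin/doctrine-module migrations:migrate
# To roll back later: check out the commit where EnumFooType and the entity
# property still exist, then migrate down to the previous version:
vendor/bin/doctrine-module migrations:migrate <previous-version>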
I want to perform a non-backwards-compatible state upgrade using SignatureConstraint. If it were a backwards-compatible change, for example adding a property, I'd just add a nullable property to the state and that would work. However, I have no idea how I should act in the following scenarios:
Scenario 1: A new non-null field is added to the state.
Scenario 2: A field was removed from the state.
Scenario 3: A field was modified in the state, e.g. a field of type Date was transformed into an object that contains that date and some other fields.
Scenario 4: A field in the state was renamed.
The problem is that explicit upgrades do not support SignatureConstraint and I get the following error message: "Legacy contract does not satisfy the upgraded contract's constraint". So I need to find a solution for an implicit upgrade.
ContractUpgradeFlow doesn't support upgrading states that use SignatureConstraint. However, the flexibility of the signature constraint allows you to install any CorDapp as long as it's signed by the same key. You could easily write a simple flow to mimic an explicit upgrade for the scenarios you mentioned.
Here is what you could do:
Add both CorDapp JAR files (the old and the updated one) to the nodes' cordapps folder.
Write another CorDapp with a flow that consumes your existing state and outputs the new, upgraded state (see the sketch after these steps).
Add this flow JAR to the nodes' cordapps folder.
Execute the new flow to consume the older states and output the upgraded states.
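A minimal sketch of such an upgrade flow, assuming hypothetical CompanyStateV1 / CompanyStateV2 classes, a CompanyContract.Commands.Upgrade command, a CompanyContract.ID constant, and a toV2Properties() mapping helper (none of these names come from the question), for a state whose only participant is the node running the flow:
import co.paralleluniverse.fibers.Suspendable
import net.corda.core.contracts.StateAndRef
import net.corda.core.flows.FinalityFlow
import net.corda.core.flows.FlowLogic
import net.corda.core.flows.InitiatingFlow
import net.corda.core.flows.StartableByRPC
import net.corda.core.transactions.SignedTransaction
import net.corda.core.transactions.TransactionBuilder

@InitiatingFlow
@StartableByRPC
class UpgradeCompanyStateFlow(private val oldStateRef: StateAndRef<CompanyStateV1>) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val notary = oldStateRef.state.notary
        // Build the upgraded state from the legacy one (hypothetical mapping helper).
        val newState = CompanyStateV2(
            linearId = oldStateRef.state.data.linearId,
            properties = oldStateRef.state.data.toV2Properties()
        )
        // Consume the old state and issue the upgraded one in a single transaction.
        val builder = TransactionBuilder(notary)
            .addInputState(oldStateRef)
            .addOutputState(newState, CompanyContract.ID)
            .addCommand(CompanyContract.Commands.Upgrade(), ourIdentity.owningKey)
        builder.verify(serviceHub)
        val stx = serviceHub.signInitialTransaction(builder)
        // No counterparty sessions here; add them if the state has other participants.
        return subFlow(FinalityFlow(stx, emptyList()))
    }
}
Which keys you put on the Upgrade command determines the signers, which is where the note below about having the correct set of signers comes in.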
Points to Note:
Make sure to have the correct set of signers to avoid incorrect spending of the states.
This is just the overall idea. The actual implementation might get a little complicated depending on the contract rules for your state's exit transaction.
I would rather add a new Upgrade command to cater to this scenario.
You should have the overall idea now and can tweak it at your end to perform the upgrade for your use case. Hope this helps!
As a workaround, I made the incompatible change become a compatible one. Here is how it works.
I've created a state that has a propertiesV1 object. This object includes all the fields that CompanyState should contain.
@CordaSerializable
@BelongsToContract(CompanyContract::class)
data class CompanyState(
    override val linearId: UniqueIdentifier,
    val propertiesV1: CompanyV1?,
    override val participants: List<AbstractParty> = listOf()  // parties that can see and use this state (required by ContractState)
) : LinearState
Now when I need to make an incompatible change in the properties, I just add another version of the object to the state.
@CordaSerializable
@BelongsToContract(CompanyContract::class)
data class CompanyState(
    override val linearId: UniqueIdentifier,
    val propertiesV1: CompanyV1?,
    val propertiesV2: CompanyV2?,
    override val participants: List<AbstractParty> = listOf()  // parties that can see and use this state (required by ContractState)
) : LinearState
Neither the contract nor the flows are replaced; they are just updated to handle the propertiesV2 field.
I am working on some Django/Python code.
Basically, the backend of my code gets sent a dict of parameters named 'p'. These values all come off Django models.
When I tried to override them like this:
p['age']=25
I got a 'model error'. Yet, if I write:
p.age=25
it works fine.
My suspicion is that, internally, the first form tries to set a new value on an instance of a class created by Django, which objects to being overridden, whereas the second simply replaces the attribute of the same name ('age') on the Django instance, without regard for the prior origin, type, or class of what Django created.
All of this is in a RESTful framework, and actually in test code. So even if I am right I don't believe it changes anything for me in reality.
But can anyone explain why one type of assignment to an existing dict works, and the other fails?
p is a Django model instance (an object), not a dict; Django built it that way.
As such, attribute assignment (p.age) lets you change an attribute of that object, while item assignment (p['age']) fails because a model instance doesn't support it the way a dict does.
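A minimal sketch of the difference, using a plain class to stand in for whatever model instance Django hands you (the class and field names are illustrative):
class Person:                  # stand-in for a Django model instance
    def __init__(self, age):
        self.age = age

p = Person(age=30)

p.age = 25                     # attribute assignment works on any object
print(p.age)                   # 25

try:
    p['age'] = 25              # item assignment only works on objects that
                               # define __setitem__, as dict does
except TypeError as exc:
    print(exc)                 # 'Person' object does not support item assignment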
I am using OpenJPA (JPA 1.0) on WebLogic 10.0.x with Oracle. I have defined a OneToMany relationship as below:
@Entity
public class Compound implements Serializable {
    ...
    @OneToMany(mappedBy="compound", fetch=FetchType.LAZY, cascade=CascadeType.ALL)
    private List<Submission> submissions = new ArrayList<Submission>();
    ...
}

@Entity
public class Submission implements Serializable {
    ...
    @ManyToOne(fetch=FetchType.LAZY, cascade=CascadeType.REFRESH)
    @JoinColumn(name="compoundId")
    private Compound compound;
    ...
}
When I delete a Compound entity, all child Submission entities should be deleted as well. This works as a general rule, except that I have a foreign key constraint set up on these tables:
ALTER TABLE SUBMISSION
ADD CONSTRAINT FK_SUBMISSION_COMPOUND
FOREIGN KEY (COMPOUNDID)
REFERENCES COMPOUND(COMPOUNDID);
Now when I attempt to delete the Compound entity I encounter the following exception:
ORA-02292: integrity constraint (HELC.FK_SUBMISSION_COMPOUND) violated - child record found {prepstmnt 3740 DELETE FROM Compound WHERE compoundId = ? [params=(long) 10384]} [code=2292, state=23000]"
The above exception implies that OpenJPA is attempting to delete the parent prior to cascading the delete onto the child entities. I've read a few articles via Google about this exception, dating back to 2006. However, the most recent article suggests that this bug has been fixed?
http://mail-archives.apache.org/mod_mbox/openjpa-dev/200609.mbox/%3C14156901.1158019042738.JavaMail.jira#brutus%3E
https://issues.apache.org/jira/browse/OPENJPA-235
Can anyone suggest why this is not working and what I can do about it? I am loath to delete the child entities manually, especially as this is one of the less complicated relationships in my schema, and whatever solution I use here I will need to apply elsewhere.
Thanks
Jay
When I delete a Compound entity, all child Submission entities should
be deleted as well. This works as a general rule, except that I have a
foreign key constraint set up on these tables:
If you can change the foreign key constraint, that should solve the problem as far as the database is concerned. I'm not sure how OpenJPA will behave here.
ALTER TABLE SUBMISSION
ADD CONSTRAINT FK_SUBMISSION_COMPOUND
FOREIGN KEY (COMPOUNDID)
REFERENCES COMPOUND(COMPOUNDID)
ON DELETE CASCADE;
One thing - as discussed above this is Weblogic 10.0.x. I suspect we
are using the bundled version of OpenJPA / Kodo, which is probably
quite old...
My own feeling is that the bug you referred to should have been fixed by this version, but it's also a) close enough in time that it might not have been fixed, and b) potentially a big enough problem that I think you should spend some time verifying the version and the fix. (Actually, I just noticed that OpenJPA 1.0 was released in Aug 2007. That's a lot earlier than I thought, which makes it more likely you don't have the bug fix.)
If you can't modify the database (because it's a legacy system that clearly doesn't intend for clients to rely on cascading deletes), and if the bug isn't fixed in your version, you'll have to manage the order of SQL statements yourself.
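For what it's worth, a rough sketch of what managing the ordering yourself could look like with plain JPA calls, assuming a resource-local EntityManager, a getSubmissions() accessor for the submissions field shown above, and entityManagerFactory / compoundId standing in for whatever you have available (on WebLogic you would typically do the same inside a container-managed transaction):
// Delete the children explicitly before the parent so the foreign key
// constraint is never violated, regardless of how the provider orders its SQL.
EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

Compound compound = em.find(Compound.class, compoundId);
for (Submission submission : compound.getSubmissions()) {
    em.remove(submission);
}
compound.getSubmissions().clear();   // keep the in-memory relationship consistent
em.remove(compound);

em.getTransaction().commit();
em.close();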
The burden of manually managing SQL statements (which is one of the things OpenJPA is supposed to do for you) might be enough to get management to either upgrade OpenJPA or update the foreign key constraints in the database.
I really hope you get a better answer than this one.
Using Django 1.3 with PostgreSQL 9.0, I have a multi-step object creation function/view, where:
The main object is created (I have tried both MyModel.objects.create() and manually calling object.save()), and
then the m2m relationships are set up (they must follow the main object's creation so that said object has an id to relate to).
Some of those relationships may fail, or some other problem may arise, thus I need the entire function to behave atomically.
I've tried wrapping the function with the transaction.commit_on_success decorator, and I've also tried commit_manually (setting the commit point at the end of the function), but neither works. That is, the main object is created and saved in the database even when an exception is raised later in the function, which leaves the database in an inconsistent state, to put it politely.
So, how do I debug this? I've seen similar questions, but they had to do with MySQL, whereas this kind of broken transaction is not supposed to happen with Postgres. There were tickets on the Django Trac about this issue from years back, but they were supposedly fixed/resolved. Could any Djangonauts out there provide enlightenment, please?
See this ticket: https://code.djangoproject.com/ticket/6669
I think for now you'll just need to call transaction.rollback() explicitly when you get an IntegrityError.
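For example, with the Django 1.3 transaction decorators mentioned in the question, a minimal sketch (the function, field, and argument names are illustrative):
from django.db import IntegrityError, transaction

@transaction.commit_manually
def create_with_relations(name, tags):
    try:
        obj = MyModel.objects.create(name=name)   # main object
        obj.tags.add(*tags)                       # m2m setup; may fail
    except IntegrityError:
        transaction.rollback()                    # discard the partial insert
        raise
    else:
        transaction.commit()                      # commit only on full success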
I don't know if this applies to you, but the problem that brought me here was a failure to read the manual with regard to Django testing.
If you are testing code with transactions in it, you need to use TransactionTestCase instead of TestCase; failing to do so will result in the tests seeing the behavior you describe.
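In test code that looks something like this (the class and test names are illustrative):
from django.test import TransactionTestCase   # note: not django.test.TestCase

class MyTransactionalTests(TransactionTestCase):
    def test_partial_failure_is_rolled_back(self):
        # Exercise the code that uses commit_manually / commit_on_success here
        # and assert that nothing was persisted after the failure.
        pass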
I'm using the entity framework.
In one of my unit tests I have a line like:
this.Set<T>().Add(entity);
On executing that line I get:
System.InvalidOperationException : The model backing the
'InvoiceNewDataContext' context has changed since the database was
created. Either manually delete/update the database, or call
Database.SetInitializer with an IDatabaseInitializer instance. For
example, the DropCreateDatabaseIfModelChanges strategy will
automatically delete and recreate the database, and optionally seed it
with new data.
Well I've actually deleted the database and removed the connection string.
I'm surprised this error happens on Add, as I wouldn't expect it until I save the data and it discovers there is no database.
In previous projects/solutions, I have been able to add to the context in unit tests for test purposes without actually calling SaveChanges.
Would anyone know why this would be happening in my latest projects/solutions?
Are you sure it really didn't use a database in your previous projects? If you do not specify any connection string, it will silently use a default one pointing at a SQL Express database with a local .mdf file, so make sure that isn't happening now.