I'm migrating the DBAL of a Zend Framework 3 application to Doctrine. The data-retrieving part is complete and working. Now it's the data saving's turn.
There is a more or less complicated (about 10 levels deep) dynamic structure of objects/entities, and a structure of Fieldsets that mirrors it. Hydrating the Form's data into the nested object/entity is handled by Zend\Form, so I get a complete object from the Form, ready for saving.
Currently the saving is working as follows:
There is a structure of Mapper classes that reflects the objects'/entities' structure. That is: every entity has its Mapper, and every Mapper knows its "sub-Mappers". If the entity FooEntity, processed by the mapper FooMapper, has a property of type BarEntity, then FooMapper#save(...) calls BarMapper#save(...) to persist it. If the entity BarEntity, processed by the mapper BarMapper, has a property of type BuzEntity[], then BarMapper#save(...) calls BuzMapper#save(...) in a loop to persist the data. And so forth...
How can I replace all that hard-to-maintain code with Doctrine's functionality and handle this cascading saving in a more elegant way?
Some of the saving functionality can be eliminated by using cascade={"persist"}. But my problem is that for all the remaining parts I have no better idea than keeping the hierarchical Mapper structure and just replacing Zend\Db's insert(...) and update(...) with Doctrine's persist(...).
Related
A repository is like a collection of domain objects, so it should not return DTOs or anything else that is not a domain object.
But suppose your domain model has 20 fields holding a large amount of data and you only want to use 2 of them here: you have to fetch the whole row first and then map it, which is very inefficient.
It depends. If you are modeling with DDD and CQRS, then you should return Aggregates for commands and ViewModels for queries. You can split the repos into reads and writes, where the reads serve views, for example, or REST APIs, in which case you would have DTOs and not ViewModels; thus you only return the data (fields) that you need from the query.
In the write stack you should have a single retrieval method, and its return type should be the Aggregate of that specific repository (use lazy loading if you don't want to load all the related child collections):
TAggregate GetById(object id)
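For illustration, here is a minimal sketch of that read/write split in Python; every name in it (Order, OrderWriteRepository, OrderSummaryDTO, and so on) is hypothetical:

    from dataclasses import dataclass
    from typing import Protocol

    class Order:
        """The Aggregate root of the write stack (details omitted)."""

    # Write stack: commands load and persist the whole Aggregate.
    class OrderWriteRepository(Protocol):
        def get_by_id(self, order_id: object) -> Order: ...
        def save(self, order: Order) -> None: ...

    # Read stack: queries return only the fields a view actually needs,
    # e.g. 2 of the 20 columns, instead of a fully mapped domain object.
    @dataclass(frozen=True)
    class OrderSummaryDTO:
        order_id: int
        status: str

    class OrderReadRepository(Protocol):
        def summaries(self) -> list[OrderSummaryDTO]: ...

The point of the split is that the read side can be backed by a plain SQL projection, while the write side keeps the repository-as-collection semantics described above.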
I'm learning SQLAlchemy and using a database where multiple table lookups are needed to find a single piece of data.
I'm trying to find the best (most efficient and Pythonic) way to map the multiple lookups to a single SQLAlchemy object or reusable python method.
Ultimately, there will be dozens if not hundreds of mapped objects such as these, so something like a .map file might be handy.
For example (in pseudocode):
If I want to find the data 'Status' starting from 'Patient Name', I have to use three tables.
Instead of writing a function for every potential 'this' to 'that' data request, is there an SQLAlchemy or Pythonic way to make the mappings?
I CAN make new, temporary SQLAlchemy Tables to store data. I am NOT at liberty to change the database I'm reading from. I'm hoping to reduce the number of individual calls to the database, because it is remote and slow.
I'm not sure a data join will work, because the primary keys, foreign keys, and column names are inconsistent in the database. But I don't really know how to make select-joins in SQLAlchemy.
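For what it's worth, a select-join in SQLAlchemy does not require foreign keys to exist in the database; you can spell out each join condition explicitly. A minimal sketch with SQLAlchemy Core (the table and column names are made up):

    from sqlalchemy import MetaData, Table, create_engine, select

    engine = create_engine("oracle+cx_oracle://user:pass@host/service")  # hypothetical DSN
    metadata = MetaData()

    # Reflect the three existing tables; nothing in the database is changed.
    patients = Table("patients", metadata, autoload_with=engine)
    visits   = Table("visits",   metadata, autoload_with=engine)
    statuses = Table("statuses", metadata, autoload_with=engine)

    # Explicit ON clauses work even with inconsistent key/column names.
    stmt = (
        select(statuses.c.status)
        .select_from(
            patients
            .join(visits, patients.c.patient_id == visits.c.pat_id)
            .join(statuses, visits.c.status_code == statuses.c.code)
        )
        .where(patients.c.name == "Patient Name")
    )

    with engine.connect() as conn:
        rows = conn.execute(stmt).all()  # one round trip for the whole lookup

Because the three lookups collapse into a single SELECT, this also cuts the number of individual calls to the slow remote database.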

Can these tables be auto-generated from a map.ini file?
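As a sketch of that .ini idea: the mappings can be data rather than code. Assuming a hypothetical map.ini that lists the join path for each lookup, a single generic function could serve every 'this' to 'that' request:

    import configparser
    from sqlalchemy import MetaData, Table, select

    # Hypothetical map.ini:
    #   [status_by_patient_name]
    #   path   = patients.patient_id -> visits.pat_id, visits.status_code -> statuses.code
    #   source = patients.name
    #   target = statuses.status

    def build_lookup(engine, section, inifile="map.ini"):
        cfg = configparser.ConfigParser()
        cfg.read(inifile)
        spec = cfg[section]
        metadata, tables = MetaData(), {}

        def col(ref):
            # "table.column" -> Column object, reflecting the table on demand
            tname, cname = ref.strip().split(".")
            if tname not in tables:
                tables[tname] = Table(tname, metadata, autoload_with=engine)
            return tables[tname].c[cname]

        source, target = col(spec["source"]), col(spec["target"])
        joined = None
        for hop in spec["path"].split(","):
            left, right = (col(r) for r in hop.split("->"))
            joined = left.table if joined is None else joined
            joined = joined.join(right.table, left == right)

        def lookup(value):
            stmt = select(target).select_from(joined).where(source == value)
            with engine.connect() as conn:
                return [row[0] for row in conn.execute(stmt)]

        return lookup

    # status_of = build_lookup(engine, "status_by_patient_name")
    # status_of("Patient Name")  -> list of Status values

This is only a sketch of the approach, not a finished library, but it shows that dozens of mappings can live in one config file instead of dozens of functions.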
EDIT:
I might add that some of these relationships could be one-to-many, i.e. a patient may be associated with more than one statusID...such as...
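A one-to-many like that can be declared explicitly in SQLAlchemy even without database foreign keys (SQLAlchemy 2.0 style; all names here are hypothetical):

    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship

    class Base(DeclarativeBase):
        pass

    class Patient(Base):
        __tablename__ = "patients"
        patient_id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str]
        # One patient -> many status rows; the join condition is written out
        # because the database has no usable foreign keys. viewonly=True keeps
        # SQLAlchemy from ever trying to write to the read-only database.
        statuses: Mapped[list["PatientStatus"]] = relationship(
            primaryjoin="foreign(PatientStatus.pat_id) == Patient.patient_id",
            viewonly=True,
        )

    class PatientStatus(Base):
        __tablename__ = "statuses"
        status_id: Mapped[int] = mapped_column(primary_key=True)
        pat_id: Mapped[int]
        status: Mapped[str]

With that in place, patient.statuses yields every status row associated with the patient.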
I’m trying to display some tabular data with a QTableView subclass and a QAbstractTableModel subclass. I can’t get the data to show up, but before I start really pounding on it I want to make sure that I’m using models in the way they were intended.
The data layer of my application periodically receives new data and distributes the data to the other parts of the application by calling slots like
void new_data_received(QSharedPointer<Measurement> measurement)
where Measurement is my data class. This allows the data to be passed around without being copied (some of my data classes are very large). Measurements are immutable; the table view that displays them doesn’t allow any editing.
Measurement is a subclass of QAbstractTableModel, so whenever I receive a new measurement I call setModel on my QTableView subclass instance with the new data as the parameter. (In the time before the first measurement is received, there is no model set on the table view.)
Are Qt’s view classes intended to be used like this, with a new model being set every so often? Or should there be just one instance of the model class, with the same lifetime as the table view, that receives the new data and emits dataChanged? The latter seems like it adds unnecessary structure—at least in my case—but maybe that’s the way the system was designed to be used.
I don't think your Measurement class should be a subclass of QAbstractTableModel. It should represent raw data instead, so maybe a struct with some parameters, or a list of structs, would be the right type for your data class.
Then you should implement a custom model that the incoming data get added to. When new data arrive, that model will automatically update all the views connected to it; the new data directly affect only your model, not the views.
I suppose resetting the view's model every time is not the right way to do what you want.
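To make that concrete, here is a minimal sketch of such a long-lived model, written in Python with PySide6 rather than C++ to keep it short (the same structure applies to QAbstractTableModel in C++). The Measurement accessors row_count, column_count, and value_at are assumptions, not part of your class:

    from PySide6.QtCore import QAbstractTableModel, QModelIndex, Qt

    class MeasurementModel(QAbstractTableModel):
        """One long-lived model; new measurements are pushed into it."""

        def __init__(self, parent=None):
            super().__init__(parent)
            self._measurement = None  # the current (immutable) data object

        def set_measurement(self, measurement):
            # Swap the data behind the model; all connected views repaint
            # automatically, with no setModel() call needed.
            self.beginResetModel()
            self._measurement = measurement
            self.endResetModel()

        def rowCount(self, parent=QModelIndex()):
            return 0 if self._measurement is None else self._measurement.row_count

        def columnCount(self, parent=QModelIndex()):
            return 0 if self._measurement is None else self._measurement.column_count

        def data(self, index, role=Qt.ItemDataRole.DisplayRole):
            if role != Qt.ItemDataRole.DisplayRole or self._measurement is None:
                return None
            return self._measurement.value_at(index.row(), index.column())

The view gets the model exactly once, e.g. view.setModel(model), and keeps it for its whole lifetime; a newly received measurement only triggers set_measurement(). Since your Measurement objects are immutable and passed by shared pointer, the reset is cheap: only the reference is swapped, not the data.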
Can Django support Oracle nested tables, varrays, or collections in some manner? I'm asking just for completeness, as our project is reworking the data model and attempting to move away from an EAV organization, but I don't like creating a bucketload of dependent supporting tables for each main entity.
e.g.
(not the proper Oracle syntax, but it gets the idea across)
Events
    eventid
    report_id
    result_tuple (result_type_id, result_value)
    anomaly_tuple (anomaly_type_id, anomaly_value)
    contributing_factors_tuple (cf_type_id, cf_value)
    etc.
where there can be multiple rows of the tuples for one eventid.
Each of these tuples can, of course, exist as a separate table, but this seems more concise. If it's something Django can't do, or something I can't easily get by modifying the model classes, then perhaps just having Django create the extra tables is the way to go.
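For comparison, the "extra tables" route is ordinary Django: each tuple type becomes a small model with a ForeignKey back to Event (the field types here are guesses):

    from django.db import models

    class Event(models.Model):
        report_id = models.IntegerField()

    class Result(models.Model):
        # many Result rows per Event: this replaces result_tuple
        event = models.ForeignKey(Event, on_delete=models.CASCADE,
                                  related_name="results")
        result_type_id = models.IntegerField()
        result_value = models.CharField(max_length=200)

    class Anomaly(models.Model):
        # likewise replaces anomaly_tuple
        event = models.ForeignKey(Event, on_delete=models.CASCADE,
                                  related_name="anomalies")
        anomaly_type_id = models.IntegerField()
        anomaly_value = models.CharField(max_length=200)

Querying stays concise thanks to the related managers, e.g. event.results.all() or Event.objects.filter(anomalies__anomaly_type_id=7).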
--edit--
I note that django-hstore is doing something very similar to what I want to do, but using PostgreSQL's hstore capability. Maybe I can branch off of that for an Oracle nested-table implementation. I dunno...I'm pretty new to Python and Django, so my reach may exceed my grasp in this case.
Querying a nested table gives you a cursor to traverse the tuples; one member of each tuple is yet another cursor, so you can get at the rows of the nested table.
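In Python that traversal could look like the following with cx_Oracle, which returns a CURSOR() expression column as a nested cursor object; the connection string is a placeholder, and the table and column names are taken from the pseudocode above:

    import cx_Oracle

    connection = cx_Oracle.connect("user/password@host/service")  # placeholder DSN
    outer = connection.cursor()

    # A CURSOR() expression turns one column of each event row into a
    # nested cursor over that event's result tuples.
    outer.execute("""
        SELECT e.eventid,
               CURSOR(SELECT r.result_type_id, r.result_value
                      FROM TABLE(e.result_tuple) r)
        FROM events e
    """)

    for eventid, nested in outer:
        # consume the inner cursor before fetching the next outer row
        for result_type_id, result_value in nested:
            print(eventid, result_type_id, result_value)

This is a sketch under the assumption that result_tuple really is a nested table column; a varray works the same way through TABLE().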
I am in the process of cleaning a database. This process involves changing the format of certain fields and getting rid of some data integrity issues.
I developed a program with Spring Data 1.1 to process the records in batches. The problem arises with 2 entities in a @OneToOne relationship: the record for Entity B does not exist, although Entity A has a reference to it. My job is to clear the reference to Entity B when that is the case.
The question is: should I pre-process the data to clean this up, or can I adjust Spring Data or JPA settings to put null in the field if the entity is not found?
It is "normal" - with this data - to have a FK in Entity A that does not exist in Entity B, so I want to handle this in my code and not have to pre-process the data with an additional step or other tool. The data will be arriving in batches so any pre-processing makes things more complicated for the user.
In summary, I want Spring Data to set the field to null and continue the process instead of getting an org.springframework.orm.jpa.JpaObjectRetrievalFailureException: Unable to find....
Perhaps you are looking for the @NotFound annotation?
Here is a post that talks about it.
I have had the same problem because my ORM mapping had a wrong @OneToOne unidirectional relationship.