Doctrine 2.1 - Recording activities throughout entities - doctrine-orm

I want to be able to record the actions a logged-in user performs (persists, updates, etc.).
I have set up discriminators and it works, but it only records newly persisted data.
So I have rows in a table called user_actions:
1 - Added a new customer
2 - Added a new memo
etc.
However, it doesn't record any updates to entities in my DB,
such as "1 - Updated user - id 1"
...
I am thinking of dropping the discriminator superclass and using the old way to record instead, i.e. a table with the fields:
id | action type | description | user ID | date
I'm not sure. What is the best way to log all transactions in Doctrine 2.1?
Thanks

Have you considered HasLifecycleCallbacks? You can track not only PostPersist but also PostUpdate and PostRemove (or even the Pre* events).
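Doctrine's lifecycle callbacks themselves are PHP, but the underlying pattern — an event hook for each persist/update/remove that appends a row to a user_actions-style table — can be sketched language-agnostically. Here is a minimal Python illustration of that pattern; the AuditLog class and record method are hypothetical names, not part of Doctrine:

```python
from datetime import datetime, timezone

class AuditLog:
    """Collects one row per entity lifecycle event, mirroring a
    user_actions table: id | action type | description | user ID | date."""

    def __init__(self):
        self.rows = []

    def record(self, action, description, user_id):
        # Called from the equivalent of a postPersist/postUpdate/postRemove hook.
        self.rows.append({
            "id": len(self.rows) + 1,
            "action": action,          # e.g. "persist", "update", "remove"
            "description": description,
            "user_id": user_id,
            "date": datetime.now(timezone.utc),
        })

log = AuditLog()
# postPersist-style hook
log.record("persist", "Added a new customer", user_id=1)
# postUpdate-style hook -- the event the discriminator approach was missing
log.record("update", "Updated user - id 1", user_id=1)
```

The key point is that updates and removes get their own hooks, so they land in the log the same way new persists do.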

Related

How do I save a transaction log to the database

Good day guys,
I need your opinion on this problem. Although I am using Django for my project, I am sure this problem is not tied to Django alone. I am working on a services booking system. In my database I have the 3 tables listed below:
User_Table with field
• Id
• Username
• Fullname
Services_Table with field
• Id
• name
• Price
Transaction_Table with field
• Id
• User_id
• Services_id (many to many relationship)
When a service gets booked, I write a row to the transaction table using user_id and services_id as foreign keys to the User and Services tables, meaning it's the id values that are saved.
When a client wants to view his or her transaction history, I provide it by running:
price = transaction.service.price
service_name = transaction.service.name
total_cost = sum of all services selected
so as not to present the user with id values for price and service_name.
Now here is my problem: if in the future the admin decides to change the name and price of a service, and the client goes back to view his old transaction log, the new values get shown, because I referenced them by id, which is not what I want. I want the client to see the old values, as on a receipt, even after I update the services table.
What do you suggest I do in this case?
You should record every transaction and capture the price and the amount it totalled at the moment the transaction was made. The transaction model should have fields to record every detail about the transaction.
This means:
You would have a txn_service table, where each service in a transaction is saved (with its name and price at booking time) and linked to the transaction table.
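A framework-agnostic sketch of that snapshot idea (the class and field names here are illustrative, not from the question's schema): copy the service's name and price into the line item when the booking happens, so later edits to the service don't rewrite old receipts.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    id: int
    name: str
    price: float

@dataclass
class TxnService:
    # Snapshot of the service at booking time -- the receipt stays
    # stable even if the Service row is renamed or repriced later.
    service_id: int
    name: str
    price: float

@dataclass
class Transaction:
    user_id: int
    items: list = field(default_factory=list)

    def book(self, service):
        # Copy name and price NOW, instead of joining on service_id later.
        self.items.append(TxnService(service.id, service.name, service.price))

    @property
    def total_cost(self):
        return sum(item.price for item in self.items)

haircut = Service(1, "Haircut", 20.0)
txn = Transaction(user_id=7)
txn.book(haircut)
haircut.price = 35.0            # admin reprices the service later...
assert txn.total_cost == 20.0   # ...but the old transaction is unchanged
```

In Django the same idea is two extra columns on the through model (e.g. price_at_booking) filled in the booking view.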

How to design a DynamoDB table schema

I am doing my best to understand DynamoDB data modeling, but I am struggling, and I am looking for some help to build on what I have now. I feel like I have fairly simple data, but I can't see how to fit it into DynamoDB.
I have two different types of data. I have a game object and a team stats object. A Game represents all of the data about the game that week and team stats represents all of the stats about a given team per week.
A timeId is in the format of year-week (ex. 2020-9)
My Access patterns are
1) Retrieve all games per timeId
2) Retrieve all games per timeId and by TeamName
3) Retrieve all games per timeId and if value = true
4) Retrieve all teamStats per timeId
5) Retrieve all teamStats by timeId and TeamName
My attempt at modeling so far is:
PK: TeamName
SK: TimeId
This leads me to have 2 copies of each game, since there is a copy for each team. It also only allows me to scan (not query) for all teamStats by TimeId. Would something like a GSI help here? I've thought about changing the PK to something like
PK: GA-${gameId} / TS-${teamId}
SK: TimeId
I'm just very confused and the docs aren't helping me much.
Looking at your access patterns, this is a possible table design. I'm not sure if it's going to really work with your TimeId, especially for the Local Secondary Index (see note below), but I hope it's a good starting point for you.
# Table
-----------------------------------------------------------
pk | sk | value | other attributes
-----------------------------------------------------------
TimeId | GAME#TEAM{teamname} | true | ...
TimeId | STATS#TEAM{teamname} | | ...
GameId | GAME | | general game data (*)
TeamName | TEAM | | general team data (*)
# Local Secondary Index
-------------------------------------------------------------------------------
pk from Table as pk | value from Table as sk | sk from Table + other attributes
-------------------------------------------------------------------------------
TimeId | true | GAME#TEAM{teamname} | ...
With this Table and Local Secondary Index you can satisfy all access patterns with the following queries:
Retrieve all games by timeId:
Query Table with pk: {timeId}, sk: begins with 'GAME' (pk alone would also return the STATS items)
Retrieve all games per timeId and by TeamName
Query table with pk: {timeId}, sk: GAME#TEAM{teamname}
Retrieve all games per timeId and if value = true
Query LSI with pk: {timeId}, sk: true
Retrieve all teamStats per timeId
Query table with pk: {timeId}, sk: begins with 'STATS'
Retrieve all teamStats by timeId and TeamName
Query table with pk: {timeId}, sk: STATS#TEAM{teamname}
*: I've also added the following two items, as I assume that there are cases where you want to retrieve general information about a specific game or team as well. This is just an assumption based on my experience and might be unnecessary in your case:
Retrieve general game information
Query table with pk: {GameId}
Retrieve general team information
Query table with pk: {TeamName}
Note: I don't know what value = true stands for, but for the secondary index to work in my model, you need to make sure that each combination of pk = TimeId and value = true is unique.
To learn more about single-table design on DynamoDB, please read Alex DeBrie's excellent article The What, Why, and When of Single-Table Design with DynamoDB.
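To make the queries above concrete, here is a small Python sketch that builds the Query parameters in the shape boto3's Table.query accepts (as plain dicts, so no AWS connection is needed); the LSI name value-index is an assumption, not something from the question:

```python
def query_params(time_id, *, sk_prefix=None, lsi_value=None):
    """Build DynamoDB Query parameters for the table design above.
    With boto3 you would unpack the result into table.query(**params)."""
    if lsi_value is not None:
        # Access pattern 3: games in a week where value = true, via the LSI.
        return {
            "IndexName": "value-index",  # hypothetical LSI name
            "KeyConditionExpression": "pk = :pk AND #v = :val",
            "ExpressionAttributeNames": {"#v": "value"},
            "ExpressionAttributeValues": {":pk": time_id, ":val": lsi_value},
        }
    if sk_prefix is None:
        # Everything (games and stats) for one week.
        return {
            "KeyConditionExpression": "pk = :pk",
            "ExpressionAttributeValues": {":pk": time_id},
        }
    # Access patterns 1, 2, 4, 5: narrow by sort-key prefix.
    return {
        "KeyConditionExpression": "pk = :pk AND begins_with(sk, :prefix)",
        "ExpressionAttributeValues": {":pk": time_id, ":prefix": sk_prefix},
    }

# e.g. all games for week 2020-9 played by one team:
p = query_params("2020-9", sk_prefix="GAME#TEAMjets")
```

Each access pattern then differs only in the sort-key prefix ('GAME', 'GAME#TEAM{teamname}', 'STATS', 'STATS#TEAM{teamname}') or in routing through the LSI.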

How to interact with Existing database with Model through template function in Django

I have an existing table called empname in my postgres database with columns
(Projectid, empid, name, Location) and rows:
(1, 101, Raj, India),
(2, 201, David, USA)
The app console will have the following:
1) Projectid = textbox
2) Ops = (View, Insert, Edit) dropdown
Case 1:
If I enter project id 1 and select View, it displays all records for Projectid = 1 (here, 1 record).
Case 2:
If I enter project id 3 and select Insert, it asks for all the inputs (empid, name, location) and inserts a new row into the table.
Case 3:
If I enter project id 2 and select Edit, it shows all the fields for that id; the user can edit any column and save, which updates the existing record in the backend.
If no data is found for the given project id, it displays "no records found".
Please help me with this, as I am stuck on the models.
Once you have your models created, the next task is the form classes. I can identify at least 3 forms you will need: one to display the information (case 1), another to collect information (case 2), and a last one to edit the information (case 3). Wire the forms up to the views and add the URLs.
A good reference is a Django user registration form, since it covers all three cases: http://www.tangowithdjango.com/book17/chapters/login.html
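The three cases reduce to a simple dispatch on the selected operation. Here is a framework-free Python sketch of that control flow with a dict standing in for the empname table; in Django each branch would become a form plus a view, but the logic is the same:

```python
# Toy stand-in for the empname table, keyed by Projectid.
table = {1: {"empid": 101, "name": "Raj", "location": "India"},
         2: {"empid": 201, "name": "David", "location": "USA"}}

def handle(project_id, op, data=None):
    if op == "view":
        # Case 1: show the record, or report that none exists.
        return table.get(project_id, "no records found")
    if op == "insert":
        # Case 2: create a new record from the collected inputs.
        table[project_id] = data
        return table[project_id]
    if op == "edit":
        # Case 3: update only the columns the user changed.
        if project_id not in table:
            return "no records found"
        table[project_id].update(data)
        return table[project_id]
```

In Django, "view" maps to a read-only form populated from a queryset, "insert" to a ModelForm save, and "edit" to a ModelForm bound to the existing instance.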

Cassandra, schema and process design for concurrent writes

This is a long-winded question about Cassandra schema design. I'm here to get input from experts on a use case I'm working on. All input, suggestions, and criticism are welcome. Here goes my question.
We would like to collect REVIEWS from our USERS about some PAPERS we are about to publish. For each paper we seek 3 reviews, but we send out review invites to 3*2 = 6 users. All 6 users can submit their reviews to our system, but only the first 3 count, and these first 3 reviewers will be rewarded for their work.
In our Cassandra DB, there are three tables: USER, PAPER and REVIEW. The USER and PAPER tables are simple: each user corresponds to a row in the USER table with a unique USER_ID; similarly, each paper has a unique PAPER_ID in the PAPER table.
The REVIEW table looks like this
CREATE TABLE REVIEW(
    PAPER_ID uuid,
    USER_ID uuid,
    REVIEW_CONTENT text,
    PRIMARY KEY(PAPER_ID, USER_ID)
);
We use PAPER_ID as the partition key of the REVIEW table so that all reviews of a given paper are stored in a single Cassandra partition. For each paper, we pick 6 users, insert 6 entries into the REVIEW table and send out 6 invites to those users. So, for paper "P1", there are 6 entries in the REVIEW table that look like this:
----------------------------------------------------
PAPER_ID | USER_ID | REVIEW_CONTENT |
----------------------------------------------------
P1 | U1 | null |
----------------------------------------------------
P1 | U2 | null |
----------------------------------------------------
P1 | U3 | null |
----------------------------------------------------
P1 | U4 | null |
----------------------------------------------------
P1 | U5 | null |
----------------------------------------------------
P1 | U6 | This paper ... |
---------------------------------------------------
... | ... | ... |
Users submit reviews via a web browser over HTTP. At the backend, we use the following process to handle a submitted review (using paper "P1" as an example):
1. Use partition key "P1" to get all 6 entries from the REVIEW table.
2. Count how many of these 6 entries have non-null values in the REVIEW_CONTENT column (a non-null value indicates that the corresponding user has already submitted a review; in the table above, user "U6" has submitted his review, while the other 5 have not).
3. If this number >= 3, we already have enough reviews; return a message like "Thanks, we already have enough reviews."
4. If this number < 2, save the current review to the corresponding entry in the REVIEW table and return a message like "Your review has been accepted." (E.g. if the current reviewer is "U1", fill the REVIEW_CONTENT column of the "P1, U1" entry with the current review content.)
5. If this number = 2, this is the most complicated case, as the current submission is the last one we will accept. We first save the current review to the REVIEW table, then find the ids of all three users who have submitted reviews (including the current user) and record their ids in a transaction table so we can pay them rewards later.
But this process does not handle concurrent submissions correctly. Consider the following case: two users have already submitted their reviews, and meanwhile 3 other users are submitting theirs via three concurrent runs of the process above. At step 5, each of the three will think he is the 3rd and last submitter and insert new records into the transaction table. This leads to double counting: a single user may be rewarded more than once for the same review.
Another problem is that the process may never reach step 5. Say there are no submissions in the REVIEW table yet and 4 users submit their reviews at the same time. All of them save their reviews at step 4. After this, later submitters will always be rejected, as there are already 4 accepted reviews. But since we never reached step 5, no ids are recorded in the transaction table and no user ever gets a reward.
So here is my question: how should I handle my use case with Cassandra as the back-end DB? Will a Cassandra COUNTER help? If so, how? I have not thought through how to use COUNTER yet, but this blog (http://aphyr.com/posts/294-call-me-maybe-cassandra) warned that Cassandra counters are not safe (quote: "Consequently, Cassandra counters will over- or under-count by a wide range during a network partition."). Will Cassandra's Compare and Set (CAS) feature help? If so, how? Again, the same blog warned that "Cassandra lightweight transactions are not even close to correct."
Rather than creating empty entries in your review table, I would consider leaving it empty and only filling it in as reviews are submitted. To handle concurrency, add a timeuuid column as a clustering key:
CREATE TABLE review(
    paper_id uuid,
    submission_time timeuuid,
    user_id uuid,
    content text,
    PRIMARY KEY (paper_id, submission_time)
);
When a user makes a submission, add the entry to the table. Then, AFTER the write succeeds, query the table (by paper_id only) and find out whether the user's id is among the first three. Respond to the user accordingly. Since you're committed to a small set of reviewers, the extra overhead of fetching all the reviews should be minimal (especially since you don't need to include the content column in that query).
If you need to track who's reviewing the papers, add a set of user ids to the paper table and write the six user ids there.
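The post-write check can be sketched as follows. The rows list stands in for the query result, already ordered by submission_time (timeuuid clustering sorts chronologically); the function name review_status is illustrative:

```python
def review_status(rows, user_id):
    """rows: (submission_time, user_id) pairs sorted by the timeuuid
    clustering key, i.e. chronologically. Called after the user's own
    write has succeeded; returns what to tell the submitting user."""
    first_three = [uid for _time, uid in rows[:3]]
    if user_id in first_three:
        return "accepted"
    return "enough reviews already"

# Four concurrent submitters; every run of this check after every write
# agrees on the same first three, so nobody is double-counted or lost.
rows = [(1, "U6"), (2, "U2"), (3, "U1"), (4, "U4")]
```

Because all writers append and then read the same chronological order, each submitter independently reaches the same verdict, which avoids both the double-reward and the never-reach-step-5 problems of the original process (subject to the timeuuid-ordering caveats the answer notes).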

Grouping Custom Attributes in a Query

I have an application that allows for "contacts" to be made completely customized. My method of doing that is letting the administrator setup all of the fields allowed for the contact. My database is as follows:
Contacts
id
active
lastactive
created_on
Fields
id
label
FieldValues
id
fieldid
contactid
response
So the contact table only tells whether they are active and their identifier; the fields tables only holds the label of the field and identifier, and the fieldvalues table is what actually holds the data for contacts (name, address, etc.)
So this setup has worked just fine for me up until now. The client would like to be able to pull a cumulative report: for example, a count of contacts per city, grouped by state. Effectively the data would have to look like the following:
California (from fields table)
Costa Mesa - (from fields table) 5 - (counted in fieldvalues table)
Newport 2
Connecticut
Wallingford 2
Clinton 2
Berlin 5
The state field might be id 6 and the city field id 4. I don't know if I have just been looking at this code for too long to figure it out or what.
The SQL to create those three tables can be found at https://s3.amazonaws.com/davejlong/Contact.sql
You've got an Entity Attribute Value (EAV) model. Use the field and fieldvalue tables for searching only - the WHERE clause. Then make life easier by keeping the full entity's data in a CLOB on the main table (e.g. Contacts.data) in a serialized format (WDDX is good for this). Read the data column out, deserialize, and work with it on the server side. This is much easier than the myriad of joins you'd otherwise need to reproduce the fully hydrated entity from an EAV setup.
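That said, the specific state/city report is answerable in SQL with a self-join on fieldvalues. A self-contained sketch using Python's sqlite3 (standing in for the real database; field ids 6 for state and 4 for city follow the question's example, and the sample rows are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE fields (id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE fieldvalues (id INTEGER PRIMARY KEY, fieldid INT,
                          contactid INT, response TEXT);
INSERT INTO fields VALUES (6, 'State'), (4, 'City');
-- One state row and one city row per contact.
INSERT INTO fieldvalues (fieldid, contactid, response) VALUES
  (6, 1, 'California'), (4, 1, 'Costa Mesa'),
  (6, 2, 'California'), (4, 2, 'Costa Mesa'),
  (6, 3, 'California'), (4, 3, 'Newport');
""")

# Join each contact's state value to its city value, then count
# contacts per (state, city) pair.
rows = con.execute("""
    SELECT st.response AS state, ct.response AS city, COUNT(*) AS n
    FROM fieldvalues st
    JOIN fieldvalues ct ON ct.contactid = st.contactid AND ct.fieldid = 4
    WHERE st.fieldid = 6
    GROUP BY st.response, ct.response
    ORDER BY state, city
""").fetchall()
# rows -> [('California', 'Costa Mesa', 2), ('California', 'Newport', 1)]
```

Each additional attribute in the report costs one more self-join, which is exactly the EAV pain the serialized-CLOB suggestion above is trying to avoid for full-entity reads.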