I have multiple tables that have a UNIQUE key (ID) along with other attributes.
TABLE 1

| ID | NAME  | DESCRIPTION          |
|----|-------|----------------------|
| 01 | JOB01 | THIS IS A SAMPLE JOB |
| 02 | JOB26 | ANOTHER SAMPLE JOB   |
TABLE 2

| ID | NAME   | DESCRIPTION              |
|----|--------|--------------------------|
| 43 | TRIG01 | THIS IS A SAMPLE TRIGGER |
| 56 | TRIG26 | ANOTHER SAMPLE TRIGGER   |
I also have another table that holds reviews. A review can be on a job, on a trigger, or on any other such activity (activity tables), and any activity can have one or more reviews.
Here the ID column identifies the related row in the activity tables (Table 1 and Table 2). For example, JOB01 has 2 reviews.
TABLE 3 (REVIEW)

| ID | NAME     | DESCRIPTION                     |
|----|----------|---------------------------------|
| 01 | REVIEW01 | THIS IS A SAMPLE REVIEW         |
| 01 | REVIEW02 | ANOTHER SAMPLE REVIEW ON JOB 01 |
| 02 | REVIEW89 | SAMPLE REVIEW ON JOB 02         |
| 43 | REVIEW21 | ANOTHER SAMPLE REVIEW ON TRIG01 |
| 43 | REVIEW29 | ANOTHER SAMPLE REVIEW ON TRIG01 |
Is it okay to create relationships like this in my data model for data analysis (I want to see the reviews related to my jobs, my triggers, etc.)? Would it affect performance, considering roughly a hundred thousand rows in each table?
Is it better to split Table 3 up by activity type, no matter how many tables that produces? Or do multiple 1:M relationships like this work fine?
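For reference, here is a minimal sketch of the kind of join this design implies, in Python with sqlite3 (the table and column names are adapted from the example above; everything else is an assumption). Each activity table gets its own 1:M relationship to the review table on ID:

import sqlite3

# Example schema: activity tables plus one shared review table, all
# keyed on the same ID space, as in Tables 1-3 above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE jobs     (id INTEGER PRIMARY KEY, name TEXT, description TEXT);
    CREATE TABLE triggers (id INTEGER PRIMARY KEY, name TEXT, description TEXT);
    CREATE TABLE reviews  (id INTEGER, name TEXT, description TEXT);
    INSERT INTO jobs    VALUES (1, 'JOB01', 'THIS IS A SAMPLE JOB');
    INSERT INTO reviews VALUES (1, 'REVIEW01', 'THIS IS A SAMPLE REVIEW'),
                               (1, 'REVIEW02', 'ANOTHER SAMPLE REVIEW ON JOB 01');
""")

# Reviews on my jobs: a plain 1:M join. The same pattern repeats once
# per activity table (triggers, etc.), one relationship each.
for row in conn.execute("""
    SELECT jobs.name, reviews.name, reviews.description
    FROM jobs JOIN reviews ON reviews.id = jobs.id
"""):
    print(row)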
I have a question about the usage of prefetch_related. Based on my understanding, I need to use prefetch_related for reverse foreign key relationships.
As an example, I have a User(id, name) model and a SchoolHistory(id, start_date, school_name, user_id [FK user.id]) model. A user can have multiple school history records.
If I query the database using the following SQL (a LEFT JOIN, so that users without school history still appear):
SELECT
    user.id,
    name,
    start_date,
    school_name
FROM user
LEFT JOIN school_history ON school_history.user_id = user.id
the expected result would be:
| User ID | Name  | Start Date | School    |
|---------|-------|------------|-----------|
| 1       | Human | 1/1/2022   | Michigan  |
| 1       | Human | 1/1/2021   | Wisconsin |
| 2       | Alien |            |           |
This is the current result that I'm getting instead with the ORM and a serializer:
| User ID | Name  | school_history                                                                         |
|---------|-------|----------------------------------------------------------------------------------------|
| 1       | Human | [{start_date: 1/1/2022, school: Michigan}, {start_date: 1/1/2021, school: Wisconsin}]  |
| 2       | Alien | []                                                                                     |
This is the ORM query that I’m using:
from django.db.models import Prefetch

User.objects.prefetch_related(
    Prefetch(
        'school_history',
        queryset=SchoolHistory.objects.order_by('start_date'),
    )
)
Is there a way for the ORM query to produce a result similar to the SQL? I want multiple rows when there are multiple schools associated with a user.
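For what it's worth, one way to get flat rows like the SQL result is to skip the prefetch and let values() span the reverse relation instead. This is a sketch, assuming the related name school_history used above:

# One flat dict per (user, school history) pair via a LEFT OUTER JOIN;
# users without any history still appear, with None in the history fields.
rows = User.objects.values(
    'id',
    'name',
    'school_history__start_date',
    'school_history__school_name',
).order_by('id', 'school_history__start_date')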
I have 2 tables in Power BI. One contains all transactions to and from people (each client identified by an id, where "I" can be either the receiver or the sender of $), and the other holds the detail for each client.
Table 1 would look something like this:
| $ | sender id | receiver id |
|---|-----------| ------------|
| 10| 1 | 2 |
| 15| 1 | 3 |
| 20| 1 | 2 |
| 15| 3 | 1 |
| 10| 3 | 1 |
| 25| 2 | 1 |
| 10| 1 | 2 |
The second table contains the client id and name:
| id | name |
|----|-------|
| 1 | "me" |
| 2 | John |
| 3 | Susan |
The expected result is something like this (not necessarily in a table, just to show):
| $ sent | $ received | Balance|
|--------|------------|--------|
| 55 | 45 | +10 |
And in a filter I'd have "John" and "Susan", so when I select one of them I could see $ sent, $ received, and balance for that person.
The problem, of course, is that I end up with one active and one inactive relationship, so if I apply such a filter I get 0 in sender/receiver and the whole value in the other (depending on which relationship is active). And if I make another table that's "id sender" + "name sender", then I can't filter everything at once.
Is it possible to do this?
I hope this is somewhat understandable.
You will need to add 2 calculated columns to your user table:
received = CALCULATE(SUM(T1[$]), FILTER(T1, UserTable[id] = T1[receiver id]))
sent = CALCULATE(SUM(T1[$]), FILTER(T1, UserTable[id] = T1[sender id]))
Now use the new columns in your visual.
Enjoy!
After going around a bit I found a way to solve this. It's probably not the most orthodox way to do it, but it works.
What I did was add 2 columns to my sales table. One, labeled "movement", is in SQL just a CASE that yields "Charged" when the receiver is 'me' and "Payment" when the receiver is 'not-me'. Then I added another CASE column that always returns the 'not-me' id, and I used that column for the relationship with my users table.
Then I just added filters to my cards, making one a "Payment" card and the other a "Charged" card.
This all follows the previous example; in reality it was a bit trickier, since I could actually have a payment from me to myself, but that's just another CASE branch for when it is 'me-me'.
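In Python terms, those derived columns amount to the following CASE logic (a sketch using the ids from the example above; the real tables, names, and the label for the me-to-me case are assumptions):

ME = 1  # "me" in the example client table

def movement_and_counterparty(sender_id, receiver_id):
    # Derive the "movement" label and the not-me id used for the
    # relationship to the users table, covering the me-to-me edge case.
    if sender_id == ME and receiver_id == ME:
        return "Self", ME
    if receiver_id == ME:
        return "Charged", sender_id
    return "Payment", receiver_id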
I hope this is understandable; English is not my first language, and the information I actually used is partially confidential, so I had to make up the example above.
Thanks all, and have a nice day.
I have a table in RDS which consists of two columns: id, and user activity at some point in time (values active/away). I get user activity every day, so I need to add a user activity column to that table every day. Any ideas how to do it? Right now I have the table with the first two columns in RDS, but I am stuck on how to add columns to it.
+-------+------------+------------+
| id    | 2020-08-13 | 2020-08-14 |
+-------+------------+------------+
| 12345 | active     | away       |
+-------+------------+------------+
You could use an ALTER TABLE ... ADD COLUMN, but this is not the right way to solve the problem.
In a relational database, you add additional rows for repeated data, not additional columns. So your table should look like this:
+-------+-------------+--------+
| id    | status_date | status |
+-------+-------------+--------+
| 12345 | 2020-08-13  | active |
| 12345 | 2020-08-14  | away   |
+-------+-------------+--------+
Then you add a new row using an insert.
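For example, a minimal sketch in Python (with sqlite3 standing in for the RDS engine; the table and column names are assumed from the layout above):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE user_activity (id INTEGER, status_date TEXT, status TEXT)"
)

# Each day's observation becomes a new row, not a new column.
conn.execute(
    "INSERT INTO user_activity (id, status_date, status) VALUES (?, ?, ?)",
    (12345, "2020-08-15", "active"),
)
conn.commit()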
I have one workflow which contains five sessions. I am looking for a query against the Informatica repository tables/views that gives me output like the one below. I have not been able to come up with a query that gives the desired result:
workflow name, session name, source count, target count, session start time, session end time.
If you have access to the Repository metadata tables, then you can use the query below.
Metadata Tables used in query:
OPB_SESS_TASK_LOG
OPB_TASK_INST_RUN
OPB_WFLOW_RUN
Here the Repository user is INFA_REP, and the workflow name is wf_emp_load.
SELECT w.WORKFLOW_NAME,
t.INSTANCE_NAME,
s.SRC_SUCCESS_ROWS,
s.TARG_SUCCESS_ROWS,
t.START_TIME,
t.END_TIME
FROM INFA_REP.OPB_SESS_TASK_LOG s
INNER JOIN INFA_REP.OPB_TASK_INST_RUN t
ON s.INSTANCE_ID=t.INSTANCE_ID
AND s.WORKFLOW_RUN_ID=t.WORKFLOW_RUN_ID
INNER JOIN INFA_REP.OPB_WFLOW_RUN w
ON w.WORKFLOW_RUN_ID=t.WORKFLOW_RUN_ID
WHERE w.WORKFLOW_RUN_ID =
(SELECT MAX(WORKFLOW_RUN_ID)
FROM INFA_REP.OPB_WFLOW_RUN
WHERE WORKFLOW_NAME='wf_emp_load')
ORDER BY t.START_TIME
Output
+---------------+---------------+------------------+-------------------+--------------------+--------------------+
| WORKFLOW_NAME | INSTANCE_NAME | SRC_SUCCESS_ROWS | TARG_SUCCESS_ROWS | START_TIME | END_TIME |
+---------------+---------------+------------------+-------------------+--------------------+--------------------+
| wf_emp_load | s_emp_load | 14 | 14 | 10-JUN-18 18:31:24 | 10-JUN-18 18:31:26 |
| wf_emp_load | s_emp_revert | 14 | 14 | 10-JUN-18 18:31:27 | 10-JUN-18 18:31:28 |
+---------------+---------------+------------------+-------------------+--------------------+--------------------+
I am parsing the USDA's food database and storing it in SQLite for query purposes. Each food has associated with it the quantities of the same 162 nutrients. It appears that the list of nutrients (name and units) has not changed in quite a while, and since this is a hobby project I don't expect to follow any sudden changes anyway. But each food does have a unique quantity associated with each nutrient.
So, how does one go about storing this kind of information sanely? My priorities are being multi-programming-language friendly (Python and C++ having preference), sanity for me as the coder, and ease of retrieving nutrient sets to sum or plot over time.
The two approaches I had thought of so far were 162 columns (which I'm not particularly fond of, but it does make the queries simpler), or a food table that links to a nutrient_list table, which in turn links to a static table with the nutrient names and units. The second seems more flexible in case my expectations are wrong, but I wouldn't even know where to begin writing the queries for sums and time series.
Thanks
You should read up a bit on database normalization. Most of the normalization concepts are quite intuitive, but going through the definitions of the steps and seeing an example really helps in understanding them, and it will help you greatly if you want to design a database in the future.
As for this problem, I would suggest you use 3 tables: one for the foods (let's call it foods), one for the nutrients (nutrients), and one for the specific nutrients of each food (foods_nutrients).
The foods table should have a unique index for referencing and the food's name. If the food has other data associated to it (maybe a link to a picture or a description), this data should also go here. Each separate food will get a row in this table.
The nutrients table should also have a unique index for referencing and the nutrient's name. Each of your 162 nutrients will get a row in this table.
Then you have the crossover table containing the nutrient values for each food. This table has three columns: food_id, nutrient_id and value. Each food gets 162 rows in this table, one for each nutrient.
This way, you can add or delete nutrients and foods as you like and query everything independent of programming language (well, using SQL, but you'll have to use that anyway :) ).
Let's try an example. We have 2 foods in the foods table and 3 nutrients in the nutrients table:
+------------------+
| foods |
+---------+--------+
| food_id | name |
+---------+--------+
| 1 | Banana |
| 2 | Apple |
+---------+--------+
+-------------------------+
| nutrients |
+-------------+-----------+
| nutrient_id | name |
+-------------+-----------+
| 1 | Potassium |
| 2 | Vitamin C |
| 3 | Sugar |
+-------------+-----------+
+-------------------------------+
| foods_nutrients |
+---------+-------------+-------+
| food_id | nutrient_id | value |
+---------+-------------+-------+
| 1 | 1 | 1000 |
| 1 | 2 | 12 |
| 1 | 3 | 1 |
| 2 | 1 | 3 |
| 2 | 2 | 7 |
| 2 | 3 | 98 |
+---------+-------------+-------+
Now, to get the potassium content of a banana, you'd query:
SELECT foods_nutrients.value
FROM foods_nutrients, foods, nutrients
WHERE foods_nutrients.food_id = foods.food_id
AND foods_nutrients.nutrient_id = nutrients.nutrient_id
AND foods.name = 'Banana'
AND nutrients.name = 'Potassium';
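And for the sums the question mentions, aggregation is one GROUP BY away. Here is a sketch in Python's sqlite3 (matching the asker's stack), reusing the table and column names above; the database file name is an assumption:

import sqlite3

conn = sqlite3.connect("usda.db")  # assumed file name

# Sum each nutrient over a set of foods (say, a banana plus an apple);
# the same GROUP BY pattern works for averages, time series, and so on.
query = """
    SELECT nutrients.name, SUM(foods_nutrients.value) AS total
    FROM foods_nutrients
    JOIN foods ON foods.food_id = foods_nutrients.food_id
    JOIN nutrients ON nutrients.nutrient_id = foods_nutrients.nutrient_id
    WHERE foods.name IN ('Banana', 'Apple')
    GROUP BY nutrients.name
"""
for name, total in conn.execute(query):
    print(name, total)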
Use the second (more normalized) approach.
You could even get away with fewer tables than you mentioned:
tblNutrients
-- NutrientID
-- NutrientName
-- NutrientUOM (unit of measure)
-- Otherstuff
tblFood
-- FoodId
-- FoodName
-- Otherstuff
tblFoodNutrients
-- FoodID (FK)
-- NutrientID (FK)
-- UOMCount
It will be a nightmare to maintain a table with 160+ columns.
If there is a time element involved too (can measurements change?) then you could add a date field to the nutrient and/or the foodnutrient table depending on what could change.
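Spelled out as DDL, that outline might look like this (a sketch in Python's sqlite3, matching the asker's stack; the file name and column types are assumptions):

import sqlite3

conn = sqlite3.connect("food.db")  # assumed file name
conn.executescript("""
    CREATE TABLE tblNutrients (
        NutrientID   INTEGER PRIMARY KEY,
        NutrientName TEXT NOT NULL,
        NutrientUOM  TEXT NOT NULL      -- unit of measure
    );
    CREATE TABLE tblFood (
        FoodID   INTEGER PRIMARY KEY,
        FoodName TEXT NOT NULL
    );
    CREATE TABLE tblFoodNutrients (
        FoodID     INTEGER REFERENCES tblFood(FoodID),
        NutrientID INTEGER REFERENCES tblNutrients(NutrientID),
        UOMCount   REAL,                -- quantity, in the nutrient's UOM
        PRIMARY KEY (FoodID, NutrientID)
    );
""")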