Converting a string variable to binary while having missing values

I'm having trouble converting a string variable regarding depression to a binary one.
I currently have a variable in my data set called dp1 which indicates whether an individual responded "yes", "no", or "don't know" on a questionnaire. I'm trying to generate a binary variable from this where yes=1, no=0 and don't know=missing.

There are several ways to do this. Since you will want informative text for your variable to show in graphs and tables, the best approach is probably something like this using encode:
* Example generated by -dataex-. For more info, type help dataex
clear
input str10 dp1
"yes"
"no"
"don't know"
end
label def dp1 0 "no" 1 "yes" .a "don't know"
encode dp1, gen(wanted) label(dp1)
list
     +-------------------------+
     |        dp1       wanted |
     |-------------------------|
  1. |        yes          yes |
  2. |         no           no |
  3. | don't know   don't know |
     +-------------------------+
list, nolabel
     +---------------------+
     |        dp1   wanted |
     |---------------------|
  1. |        yes        1 |
  2. |         no        0 |
  3. | don't know       .a |
     +---------------------+
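Alternatively, a more manual sketch of the same recode (assuming the response strings are spelled exactly as in the example) uses generate and replace, then attaches the value label defined above:
gen wanted2 = 1 if dp1 == "yes"
replace wanted2 = 0 if dp1 == "no"
replace wanted2 = .a if dp1 == "don't know"
label values wanted2 dp1
The encode route is still preferable, as it defines the mapping and the labels in one step.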


kmatch: Check propensity scores and individuals who are matched

I'm using kmatch in Stata. I use kmatch because it provides the ematch option, which lets me match exactly on a specific variable in addition to the propensity score matching. Here is my code:
kmatch ps treatment age sex edu (outcome), ematch(level) comsup
I think kmatch differs from pscore and psmatch2 in that propensity scores are not automatically stored in the dataset. I wonder whether there is a way to save these propensity scores and to check which individuals are included in the matched sample.
The answer is in the help file, help kmatch. Add generate[(spec)] as an option to store the propensity scores as _KM_ps. Other helpful matching results are also stored with the _KM_ prefix. wgenerate[(spec)] generates variables containing the ready-to-use matching weights, and idgenerate[(prefix)] generates variables containing the IDs (observation numbers) of the matched controls.
Here is an example.
webuse cattaneo2, clear
kmatch ps mbsmoke mmarried mage fbaby medu (bweight), ///
generate(kpscore) wgenerate(_matched) idgenerate(_controlid) ate
Try this to compare results from kmatch and teffects psmatch, keeping only the propensity scores from each.
webuse cattaneo2, clear
tempfile temp1 temp2 temp3
keep mbsmoke mmarried mage fbaby medu bweight
gen id = _n
save `temp1'
* propensity scores from -teffects psmatch-
teffects psmatch (bweight) (mbsmoke mmarried mage fbaby medu), ///
ate generate(_pscore)
predict te_pscore, ps
keep te_pscore id
* flip so the score matches Pr(mbsmoke == 1), which is what kmatch reports
replace te_pscore = 1 - te_pscore
save `temp2'
* propensity scores from -kmatch-
use `temp1'
kmatch ps mbsmoke mmarried mage fbaby medu (bweight), generate(kpscore) ate
rename _KM_ps k_pscore
keep k_pscore id
save `temp3'
merge 1:1 id using `temp2'
drop _merge
list in 1/10
     +---------------------------+
     | id    k_pscore   te_psc~e |
     |---------------------------|
  1. |  1   .13229635   .1322963 |
  2. |  2    .4204439   .4204439 |
  3. |  3   .22490795   .2249079 |
  4. |  4   .16333027   .1633303 |
  5. |  5   .11024706   .1102471 |
     |---------------------------|
  6. |  6   .25395923   .2539592 |
  7. |  7   .16283038   .1628304 |
  8. |  8   .10881813   .1088181 |
  9. |  9   .10988829   .1098883 |
 10. | 10   .11608692   .1160869 |
     +---------------------------+

Single Filter for PowerBI

I have 2 tables in Power BI: one contains all transactions to and from people (each client is identified with an id, and "I" can be either the receiver or sender of the money), and the other contains the details for each client.
Table 1 would look something like
| $  | sender id | receiver id |
|----|-----------|-------------|
| 10 | 1         | 2           |
| 15 | 1         | 3           |
| 20 | 1         | 2           |
| 15 | 3         | 1           |
| 10 | 3         | 1           |
| 25 | 2         | 1           |
| 10 | 1         | 2           |
The second table contains sender id and name:
| id | name |
|----|-------|
| 1 | "me" |
| 2 | John |
| 3 | Susan |
The expected result is something like this (not necessarily in a table; this is just to show the idea):
| $ sent | $ received | Balance |
|--------|------------|---------|
| 55     | 50         | +5      |
And in a filter I'd have "John" and "Susan", so when I select one of them I can see $ sent, $ received, and balance for that person.
The problem, of course, is that I end up with one active and one inactive relationship: if I apply such a filter, I get 0 in sender/receiver and the whole value in the other (depending on which relationship is active), and if I make another table that's "id sender" + "name sender", then I can't filter everything at once.
Is it possible to do this?
I hope this is kinda understandable
You will need to add 2 columns to your user table:
Received = CALCULATE(SUM(T1[$]), FILTER(T1, T1[receiver id] = UserTable[id]))
You can do the same for sent. Now use the new columns in your visual.
Enjoy!
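Following that pattern, sketches of the other two calculated columns (table and column names assumed from the question; they are not given in the original answer):
Sent = CALCULATE(SUM(T1[$]), FILTER(T1, T1[sender id] = UserTable[id]))
Balance = UserTable[Sent] - UserTable[Received]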
After going around a bit I found a way to solve this; probably not the most orthodox way to do it, but it works.
What I did was add 2 columns to my sales table. One is labeled "movement", and in SQL it is just a CASE that yields "Charged" when the receiver is 'me' and "Payment" when the receiver is 'not-me'. Then I added a column with a CASE so it would always give me the 'not-me' id, and I used that for my relationship with my users table.
Then I just added filters on my cards, making one a "Payment" card and the other a "Charged" card.
This all follows the previous example; in reality it was a bit more tricky, as I could actually have a payment from me to myself, but that's just another CASE branch for when it is 'me-me'.
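For reference, a SQL sketch of the two CASE columns described above, assuming a transactions table t(amount, sender_id, receiver_id) and that 'me' has id 1 (all names hypothetical):
SELECT
    amount,
    sender_id,
    receiver_id,
    -- direction of the money relative to 'me'
    CASE WHEN receiver_id = 1 THEN 'Charged' ELSE 'Payment' END AS movement,
    -- always the 'not-me' id, used for the relationship to the users table
    CASE WHEN sender_id = 1 THEN receiver_id ELSE sender_id END AS counterparty_id
FROM t;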
Hope this is understandable; English is not my first language, and the information I actually used is partially confidential, so I had to make up the example above.
Thanks all, and have a nice day.

Single table or Multiple tables for hierarchy data

I need to implement the following hierarchical data:
Category (id, name, url)
SubCategory (id, name, url)
SubSubCategory (id, name, url)
Note that this is a many-to-many relationship, e.g. each node can have multiple parents or children. There will be no circular relationships (thank God); only some SubSubCategory nodes may belong to multiple SubCategory nodes.
My implementation: I use single table for this
Cat (id, type(category, subcategory, subsubcategory), name, url)
CatRelation (id, parent_id, child_id, pre_calculated_index for tree retrieval)
pre_calculated_index can be the left/right values of a modified preorder tree traversal [1, 2] or, as in my implementation, a path. This pre_calculated_index is computed when a child is added to a node, so that when you retrieve a tree you only need to sort by this field, avoiding a recursive query.
Anyway, my boss argued that this implementation is not ideal. He suggests one table per category type, with pivot tables to link them:
Category (id, name, url)
SubCategory (id, name, url)
SubSubCategory (id, name, url)
Category_SubCategory(category_id, sub_category_id)
SubCategory_SubSubCategory(sub_category_id, sub_sub_category_id)
When you retrieve a tree, you only need to join all the tables. His argument is that later, when you add an attribute to one category type, you don't end up with null fields as you would in the single-table implementation. Also, the pre_calculated_index may end up wrong, since it is computed in application code.
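A sketch of what that retrieval join could look like under the boss's schema (column names assumed from the list above):
SELECT c.name AS category, sc.name AS subcategory, ssc.name AS subsubcategory
FROM Category AS c
LEFT JOIN Category_SubCategory AS cs ON cs.category_id = c.id
LEFT JOIN SubCategory AS sc ON sc.id = cs.sub_category_id
LEFT JOIN SubCategory_SubSubCategory AS ss ON ss.sub_category_id = sc.id
LEFT JOIN SubSubCategory AS ssc ON ssc.id = ss.sub_sub_category_id
ORDER BY c.name, sc.name, ssc.name;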
Which one should I follow? Which has better performance?
I use django and postgreSQL.
PS: More detail on my pre_calculated_index implementation:
Instead of left and right values for each node, I add a path (string, unique, indexed) column to CatRelation: the root node gets path = root_id + '.', and a child node added to CatRelation gets path = parent_path + child_id + '.'. So when you sort by this path, you get everything in tree order. Examples:
Cat
| id | name | url |
|----|------------|-----|
| 1 | Cat1 | |
| 2 | Subcat1 | |
| 3 | Subcat2 | |
| 4 | Subcat3 | |
| 5 | Subsubcat1 | |
| 6 | Subsubcat2 | |
| 7 | Subsubcat3 | |
CatRelation, with the equivalent left/right (modified preorder) values shown alongside:
| id | parent_id | child_id | path   | lft | rght |
|----|-----------|----------|--------|-----|------|
| 1  | null      | 1        | 1.     | 1   | 14   |
| 2  | 1         | 2        | 1.2.   | 2   | 3    |
| 3  | 1         | 3        | 1.3.   | 4   | 11   |
| 4  | 1         | 4        | 1.4.   | 12  | 13   |
| 5  | 3         | 5        | 1.3.5. | 5   | 6    |
| 6  | 3         | 6        | 1.3.6. | 7   | 8    |
| 7  | 3         | 7        | 1.3.7. | 9   | 10   |
So when you sort by path (or order by left in the modified preorder scheme), you get this nice tree structure without recursion:
| id | parent_id | child_id | path   |
|----|-----------|----------|--------|
| 1  | null      | 1        | 1.     |
| 2  | 1         | 2        | 1.2.   |
| 3  | 1         | 3        | 1.3.   |
| 5  | 3         | 5        | 1.3.5. |
| 6  | 3         | 6        | 1.3.6. |
| 7  | 3         | 7        | 1.3.7. |
| 4  | 1         | 4        | 1.4.   |
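For reference, the retrieval is then just a sort; a sketch that also joins back to Cat for the names (tables as defined above):
SELECT c.id, c.name, r.path
FROM CatRelation AS r
JOIN Cat AS c ON c.id = r.child_id
ORDER BY r.path;
One caveat: a plain string sort only matches tree order while all ids have the same number of digits; with multi-digit ids the path components would need zero-padding.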
And I can always build path dynamically using recursion:
WITH RECURSIVE CTE AS (
    -- request_id is a placeholder for the id of the subtree's root category
    SELECT R1.*, CONCAT(R1.child_id, '.') AS dynamic_path
    FROM CatRelation AS R1
    WHERE R1.child_id = request_id
    UNION ALL
    SELECT R2.*, CONCAT(CTE.dynamic_path, R2.child_id, '.') AS dynamic_path
    FROM CTE
    INNER JOIN CatRelation AS R2 ON CTE.child_id = R2.parent_id
)
SELECT * FROM CTE;
This is not inheritance, as someone has suggested.
Your question is somewhat opinionated, since you ask for a comparison of two different approaches. I'll try to provide an answer, although I'm afraid there is no single true answer to it. In the rest of the answer I'll refer to your approach as solution A and to the approach suggested by your boss as solution B.
I would strongly suggest following the approach proposed by your boss:
because he's your boss! If something goes wrong later, nobody can blame you: you followed the instructions.
because it follows "The Zen of Python".
In particular, the following rules of The Zen of Python apply:
Explicit is better than implicit.
Solution B is very explicit; solution A is implicit.
Simple is better than complex.
Solution B is simple and straightforward; solution A is complex.
Sparse is better than dense.
Solution B is sparse; solution A is dense and hides the obvious from the user.
Readability counts.
Solution B is verbose yet easy to read; solution A requires more time and effort to understand.
You might measure performance in ms; your boss probably thinks about performance in $. Getting a junior developer on board would take far less time with solution B, and time is expensive for enterprises.
Future changes to the models are easier to implement. What if you'd like to add another field to Category which shouldn't (or doesn't need to) be present in SubCategory and SubSubCategory?
Testing (unit and functional) is much easier with solution B. It may require more lines of code and be more verbose, but it will be easier to read and understand.
Performance will vary and depends on the use case. How many records will you have in the database? What's more critical, retrieving or inserting/updating? What makes the former more performant may degrade the latter, and vice versa.
I hope you have heard Donald Knuth's sentence:
Premature optimization is the root of all evil.
Take care of performance when there are concrete issues with it. That doesn't mean you shouldn't invest any forethought concerning performance when designing your application.
You can cache queries; one option would be to use redis. Since you use PostgreSQL, you could also use materialized views. But as I said, I'd cross that bridge when I come to it.
EDIT:
You didn't mention anything about further models, but I'd assume that where you have categories you'll also have some entities classified in them, let's say products. Here is an example:
Category: Men
SubCategory: Sportswear
SubSubCategory: Running Shoes
Product: ACME speeedVX13 (fictive brand and model)
If you strictly follow this hierarchy and put products only in a SubSubCategory, then solution B is better.
But if you have a fictive product Sportskit ACME (running shoes, shorts, and a sleeveless shirt) that you can't put in a SubSubCategory and need to put in a SubCategory, skipping one level, then you might end up using something like generic relations. In that case solution A is better.
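For concreteness, a minimal sketch of such a generic relation in Django (the Product model and its field names are assumptions, not from the question):
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models

class Product(models.Model):
    name = models.CharField(max_length=200)
    # The content-type/id pair lets a product point at a Category,
    # SubCategory, or SubSubCategory row, skipping levels as needed.
    category_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    category_id = models.PositiveIntegerField()
    category = GenericForeignKey("category_type", "category_id")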

Preserving data more than once

I am writing some code in Stata and I have already used preserve once. Now I would like to preserve again, without using restore.
I know this will give an error message, but does the second preserve still save the data as of that point?
No, preserving twice without restoring in-between simply throws an error:
sysuse auto, clear
preserve
drop mpg
preserve
already preserved
r(621);
However, you can do something similar using temporary files. From help macro:
"...tempfile assigns names to the specified local macro names that may be used as names for temporary files. When the program or do-file concludes, any
datasets created with these assigned names are erased..."
Consider the following toy example:
tempfile one two three
sysuse auto, clear
save `one'
drop mpg
save `two'
drop price
save `three'
use `two'
list price in 1/5
     +-------+
     | price |
     |-------|
  1. | 4,099 |
  2. | 4,749 |
  3. | 3,799 |
  4. | 4,816 |
  5. | 7,827 |
     +-------+
use `one'
list mpg in 1/5
     +-----+
     | mpg |
     |-----|
  1. |  22 |
  2. |  17 |
  3. |  22 |
  4. |  20 |
  5. |  15 |
     +-----+
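As an aside (not from the original answer), Stata's snapshot command also keeps multiple in-memory copies of the data and is closer in spirit to a second preserve; a minimal sketch (see help snapshot):
sysuse auto, clear
snapshot save        // snapshot 1: full dataset
drop mpg
snapshot save        // snapshot 2: without mpg
snapshot restore 1   // back to the full dataset
Unlike preserve, snapshots are not restored automatically when a program exits with an error, so they complement rather than replace preserve/restore.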

How to store data with large number (constant) of properties in SQL

I am parsing the USDA's food database and storing it in SQLite for query purposes. Each food has associated with it the quantities of the same 162 nutrients. It appears that the list of nutrients (name and units) has not changed in quite a while, and since this is a hobby project I don't expect to follow any sudden changes anyway. But each food does have a unique quantity associated with each nutrient.
So, how does one go about storing this kind of information sanely? My priorities are being multi-language friendly (Python and C++ having preference), sanity for me as the coder, and ease of retrieving nutrient sets to sum or plot over time.
The two approaches I had thought of so far were 162 columns (which I'm not particularly fond of, but it does make the queries simpler), or a food table that links to a nutrient_list table, which in turn links to a static table with the nutrient names and units. The second seems more flexible in case my expectations are wrong, but I wouldn't even know where to begin writing the queries for sums and time series.
Thanks
You should read up a bit on database normalization. Most of the normalization concepts are quite intuitive, but really going through the definitions of the steps and seeing examples helps in understanding them, and will help you greatly if you want to design a database in the future.
As for this problem, I would suggest you use 3 tables: one for the foods (let's call it foods), one for the nutrients (nutrients), and one for the specific nutrients of each food (foods_nutrients).
The foods table should have a unique index for referencing and the food's name. If the food has other data associated to it (maybe a link to a picture or a description), this data should also go here. Each separate food will get a row in this table.
The nutrients table should also have a unique index for referencing and the nutrient's name. Each of your 162 nutrients will get a row in this table.
Then you have the crossover table containing the nutrient values for each food. This table has three columns: food_id, nutrient_id and value. Each food gets 162 rows in this table, one for each nutrient.
This way, you can add or delete nutrients and foods as you like and query everything independent of programming language (well, using SQL, but you'll have to use that anyway :) ).
Let's try an example. We have 2 foods in the foods table and 3 nutrients in the nutrients table:
+------------------+
| foods |
+---------+--------+
| food_id | name |
+---------+--------+
| 1 | Banana |
| 2 | Apple |
+---------+--------+
+-------------------------+
| nutrients |
+-------------+-----------+
| nutrient_id | name |
+-------------+-----------+
| 1 | Potassium |
| 2 | Vitamin C |
| 3 | Sugar |
+-------------+-----------+
+-------------------------------+
| foods_nutrients |
+---------+-------------+-------+
| food_id | nutrient_id | value |
+---------+-------------+-------+
| 1 | 1 | 1000 |
| 1 | 2 | 12 |
| 1 | 3 | 1 |
| 2 | 1 | 3 |
| 2 | 2 | 7 |
| 2 | 3 | 98 |
+---------+-------------+-------+
Now, to get the potassium content of a banana, you'd query:
SELECT foods_nutrients.value
FROM foods_nutrients, foods, nutrients
WHERE foods_nutrients.food_id = foods.food_id
AND foods_nutrients.nutrient_id = nutrients.nutrient_id
AND foods.name = 'Banana'
AND nutrients.name = 'Potassium';
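Since the question specifically asks about sums, here is a sketch of a nutrient-totals query over a set of foods, using the same schema (standard SQL; works in SQLite):
SELECT nutrients.name, SUM(foods_nutrients.value) AS total
FROM foods_nutrients
JOIN foods ON foods.food_id = foods_nutrients.food_id
JOIN nutrients ON nutrients.nutrient_id = foods_nutrients.nutrient_id
WHERE foods.name IN ('Banana', 'Apple')
GROUP BY nutrients.name;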
Use the second (more normalized) approach.
You could even get away with fewer tables than you mentioned:
tblNutrients
-- NutrientID
-- NutrientName
-- NutrientUOM (unit of measure)
-- Otherstuff
tblFood
-- FoodId
-- FoodName
-- Otherstuff
tblFoodNutrients
-- FoodID (FK)
-- NutrientID (FK)
-- UOMCount
Maintaining a table with 160+ columns will be a nightmare.
If there is a time element involved too (can measurements change?), then you could add a date field to the nutrients and/or the food-nutrients table, depending on what can change.
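A minimal sketch of that time-aware variant, assuming only the per-food measurements change over time (table and column names hypothetical, following the tbl naming style above):
CREATE TABLE tblFoodNutrientHistory (
    FoodID      INTEGER NOT NULL REFERENCES tblFood,
    NutrientID  INTEGER NOT NULL REFERENCES tblNutrients,
    MeasuredOn  DATE    NOT NULL,
    UOMCount    REAL    NOT NULL,
    PRIMARY KEY (FoodID, NutrientID, MeasuredOn)
);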