I have a set of plants (A, B, C) that may act as senders or as receivers, but in practice not all of them actually send or receive. I need to fill in the missing connections to make the data matrix square rather than leaving it rectangular.
Here is my data:
clear
input str1 sender str1 receiver value
A B 100
A C 200
B A 100
end
Stata's fillin command almost does what I want:
fillin sender receiver
drop if sender == receiver
list
+-------------------------------------+
| sender receiver value _fillin |
|-------------------------------------|
1. | A B 100 0 |
2. | A C 200 0 |
3. | B A 100 0 |
4. | B C . 1 |
+-------------------------------------+
Below is the output I expect:
+-----------------------------+
| sender receiver value |
|-----------------------------|
1. | A B 100 |
2. | A C 200 |
3. | B A 100 |
4. | B C . |
5. | C A . |
6. | C B . |
+-----------------------------+
Is there a simple way of doing this?
This is a step more general than @Pearly Spencer's solution: it works out the missing rows from the data instead of requiring you to supply them yourself.
clear
input str1 sender str1 receiver value
A B 100
A C 200
B A 100
end
egen tag = tag(receiver)
local N = _N
expand 2 if tag
replace sender = receiver if _n > `N'
replace value = . if _n > `N'
fillin sender receiver
drop if sender == receiver
list, sepby(sender)
+-------------------------------------------+
| sender receiver value tag _fillin |
|-------------------------------------------|
1. | A B 100 1 0 |
2. | A C 200 1 0 |
|-------------------------------------------|
3. | B A 100 1 0 |
4. | B C . . 1 |
|-------------------------------------------|
5. | C A . . 1 |
6. | C B . . 1 |
+-------------------------------------------+
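To end up with exactly the listing shown in the question, drop the helper variables afterwards:
drop tag _fillin
list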
You need to provide Stata with the missing piece of information and then apply fillin:
clear
input str1 sender str1 receiver value
A B 100
A C 200
B A 100
end
set obs 4
replace sender = "C" in 4
replace receiver = "A" in 4
fillin sender receiver
drop if sender == receiver
list, separator(0)
+-------------------------------------+
| sender receiver value _fillin |
|-------------------------------------|
1. | A B 100 0 |
2. | A C 200 0 |
3. | B A 100 0 |
4. | B C . 1 |
5. | C A . 0 |
6. | C B . 1 |
+-------------------------------------+
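For completeness, here is a sketch of an alternative that avoids fillin altogether: collect the full list of plants, form all ordered pairs with cross, and merge the observed values back in. The tempfile names are illustrative.
clear
input str1 sender str1 receiver value
A B 100
A C 200
B A 100
end
tempfile orig nodes
save `orig'
* list every plant appearing in either role
keep sender
rename sender plant
append using `orig', keep(receiver)
replace plant = receiver if missing(plant)
keep plant
duplicates drop
rename plant sender
save `nodes'
* form all ordered pairs, drop self-pairs, merge values back
rename sender receiver
cross using `nodes'
drop if sender == receiver
merge 1:1 sender receiver using `orig', nogenerate
sort sender receiver
list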
I have a SAS dataset that looks like this:
proc sql;
create table my_table
(id char(1),
my_date num format=date9.,
my_col num);
insert into my_table
values('A','01JAN2010'd,.)
values('A','02JAN2010'd,0)
values('A','03DEC2009'd,1)
values('A','04NOV2009'd,1)
values('B','01JAN2010'd,.)
values('B','02NOV2009'd,2)
values('C','01JAN2010'd,.)
values('C','02OCT2009'd,3)
values('D','01JAN2010'd,.)
values('D','02NOV2009'd,2)
values('D','03OCT2009'd,1)
values('D','04AUG2009'd,2)
values('D','05MAY2009'd,3)
values('D','06APR2009'd,1);
quit;
I am trying to create a new column desired that, for each group of id column, flags the row with a value of 1 if the value in my_col is missing or less than 3.
The part I'm having trouble with: when a my_col value is greater than 2, the desired value for that row should be missing, and the flagging should stop for any remaining rows in that id group.
The resulting dataset should look like this:
+----+-----------+--------+---------+
| id | my_date | my_col | desired |
+----+-----------+--------+---------+
| A | 01JAN2010 | . | 1 |
| A | 02JAN2010 | 0 | 1 |
| A | 03DEC2009 | 1 | 1 |
| A | 04NOV2009 | 1 | 1 |
| B  | 01JAN2010 | .      | 1       |
| B | 02NOV2009 | 2 | 1 |
| C | 01JAN2010 | . | 1 |
| C | 02OCT2009 | 3 | . |
| D | 01JAN2010 | . | 1 |
| D | 02NOV2009 | 2 | 1 |
| D | 03OCT2009 | 1 | 1 |
| D | 04AUG2009 | 2 | 1 |
| D | 05MAY2009 | 3 | . |
| D | 06APR2009 | 1 | . |
+----+-----------+--------+---------+
This looks like a simple application of a retained variable: set the flag to 1 when you start a new group, and set it to missing when the value of MY_COL is larger than 2.
data want;
set my_table ;
by id;
if first.id then desired=1;
if my_col>2 then desired=.;
retain desired;
run;
Also, it is not clear why you used such complicated code to create your example data. Why not a simple data step?
data my_table;
input id :$1. my_date :date. my_col;
format my_date date9.;
cards;
A 01JAN2010 .
A 02JAN2010 0
A 03DEC2009 1
A 04NOV2009 1
B 01JAN2010 .
B 02NOV2009 2
C 01JAN2010 .
C 02OCT2009 3
D 01JAN2010 .
D 02NOV2009 2
D 03OCT2009 1
D 04AUG2009 2
D 05MAY2009 3
D 06APR2009 1
;
I can't think of a simpler way to do it, but this works. You will need to have your data sorted by id.
data my_table2;
set my_table;
by id;
format gt2flag $1.;
retain gt2flag;
if first.id then gt2flag='';
if my_col gt 2 then gt2flag='Y';
if gt2flag = 'Y' then desired=.;
else desired=1;
drop gt2flag;
run;
id my_date my_col desired
A 01JAN2010 . 1
A 02JAN2010 0 1
A 03DEC2009 1 1
A 04NOV2009 1 1
B 01JAN2010 . 1
B 02NOV2009 2 1
C 01JAN2010 . 1
C 02OCT2009 3 .
D 01JAN2010 . 1
D 02NOV2009 2 1
D 03OCT2009 1 1
D 04AUG2009 2 1
D 05MAY2009 3 .
D 06APR2009 1 .
I'm trying to figure out how to determine whether the start times for the subjects in each group occur within 1 hour of each other. However, I only have one column of times, and two groups with a different date each, so I have no comparative variable for a dhms time difference: the times all sit under the same column variable. I have thought of doing a lag on the first time and then using intck to calculate the 24-hour time difference between each, but I don't think I have sufficient arguments for the intck function. Alternatively, I could maybe do a proc transpose and then take a time difference between each array variable, but that seems messy. Does anyone have a less clunky, more efficient solution? I might be overthinking this.
Sample Data:
+----------+-------+------+------------+------------+
| CLIENTID | GRPID | date | start_date | start_time |
+----------+-------+------+------------+------------+
| 2 | 1 | -2 | 10Nov2019 | 23:19:52 |
| 3 | 1 | -2 | 10Nov2019 | 23:22:51 |
| 4 | 1 | -2 | 10Nov2019 | 23:20:16 |
| 5 | 1 | -2 | 10Nov2019 | 23:21:30 |
| 6 | 1 | -2 | 10Nov2019 | 23:23:51 |
| 23 | 2 | -2 | 11Nov2019 | 23:11:38 |
| 24 | 2 | -2 | 11Nov2019 | 23:38:33 |
| 25 | 2 | -2 | 11Nov2019 | 23:15:01 |
| 26 | 2 | -2 | 11Nov2019 | 23:08:43 |
+----------+-------+------+------------+------------+
You can combine the start date and time into a temporary datetime variable (_start_dt) to ease the comparison. Then, taking the first datetime for each GRPID as the baseline, you can use a RETAIN statement to carry that baseline datetime (_base_dt) down the related data rows and find the time difference (time_diff) using the INTCK function with a dtsecond interval.
proc sort data=your_data;
by grpid clientid;
run;
data your_results (drop=_:);
retain CLIENTID GRPID DATE start_date start_time _base_dt;
format _base_dt _start_dt datetime16. time_diff time8.;
set your_data;
by grpid clientid;
_start_dt = dhms(start_date,hour(start_time),minute(start_time),second(start_time));
if first.grpid then _base_dt = _start_dt;
time_diff = intck('dtsecond', _base_dt, _start_dt);
run;
This gives the following results dataset:
+----------+-------+------+------------+------------+-----------+
| CLIENTID | GRPID | date | start_date | start_time | time_diff |
+----------+-------+------+------------+------------+-----------+
| 2 | 1 | -2 | 10Nov2019 | 23:19:52 | 00:00:00 |
| 3 | 1 | -2 | 10Nov2019 | 23:22:51 | 00:02:59 |
| 4 | 1 | -2 | 10Nov2019 | 23:20:16 | 00:00:24 |
| 5 | 1 | -2 | 10Nov2019 | 23:21:30 | 00:01:38 |
| 6 | 1 | -2 | 10Nov2019 | 23:23:51 | 00:03:59 |
| 23 | 2 | -2 | 11Nov2019 | 23:11:38 | 00:00:00 |
| 24 | 2 | -2 | 11Nov2019 | 23:38:33 | 00:26:55 |
| 25 | 2 | -2 | 11Nov2019 | 23:15:01 | 00:03:23 |
| 26 | 2 | -2 | 11Nov2019 | 23:08:43 | -0:02:55 |
+----------+-------+------+------------+------------+-----------+
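As an aside, since start_time is already a SAS time value (a count of seconds), the datetime construction in the step above can be written more compactly; this one-liner is equivalent, using the same variable names:
/* seconds since midnight can be passed straight to the seconds argument */
_start_dt = dhms(start_date, 0, 0, start_time);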
I think I've interpreted your requirements correctly; let me know if not.
It sounds like you want to check if the RANGE of the start_time over each group is < 1 hour:
Coerce the start_date to a datetime value and add the start_time before computing the range.
data have;
input
CLIENTID GRPID date start_date: date9. start_time: time8.;
format start_date date9. start_time time8.;
datalines;
2 1 -2 10Nov2019 23:19:52
3 1 -2 10Nov2019 23:22:51
4 1 -2 10Nov2019 23:20:16
5 1 -2 10Nov2019 23:21:30
6 1 -2 10Nov2019 23:23:51
23 2 -2 11Nov2019 23:11:38
24 2 -2 11Nov2019 23:38:33
25 2 -2 11Nov2019 23:15:01
26 2 -2 11Nov2019 23:08:43
run;
proc sql;
create table want (label="start range status by group") as
select
grpid,
range(dhms(start_date,0,0,0)+start_time) as start_range format=time8.,
calculated start_range < '01:00:00't as one_hr_start_flag
from have
group by grpid;
quit;
If you want to ignore the groups and focus only on the time of day, disregarding the date, the range computation would be:
* Presuming 'noon' is the center of the day;
proc sql;
create table want (label="time of day start range status overall") as
select
range(start_time) as range format=time8.,
calculated range < '01:00:00't as one_hr_start_flag
from have;
quit;
Looking only at the time of day is always troublesome when a time value falls slightly after midnight.
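To make that concrete, here is a tiny illustration with made-up times: two starts five minutes apart that straddle midnight show a time-of-day range of almost 24 hours, while the datetime-based comparison above would report five minutes.
data demo;
  t1 = '23:58:00't;           /* just before midnight          */
  t2 = '00:03:00't;           /* five minutes later, next day  */
  tod_range = range(t1, t2);  /* 23:55:00, which is misleading */
  format t1 t2 tod_range time8.;
run;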
I have a model for which I want to perform a group-by on two values and calculate the percentages of each value per outer grouping.
Currently I just run a query to get all the rows, put them into a pandas dataframe, and perform something similar to the answer here. Although this works, I'm sure it would be more efficient if I could make the query return the information I require directly.
I am currently running Django 2.0.5 with a backend DB on PostgreSQL 9.6.8
I think window functions could be the solution, as indicated here, but I cannot construct a successful combination of annotate and values to give me the desired output.
Another possible solution could be ROLLUP, introduced in PostgreSQL 9.5, if I can find a way to get the summary row as a set of extra columns for each row; but I also think it's not yet supported by Django.
Model:
class ModelA(models.Model):
grouper1 = models.CharField()
grouper2 = models.CharField()
metric1 = models.IntegerField()
All rows:
grouper1 | grouper2 | metric1
---------+----------+---------
A | C | 2
A | C | 2
A | C | 2
A | D | 4
A | D | 4
A | D | 4
B | C | 5
B | C | 5
B | C | 5
B | D | 6
B | D | 4
B | D | 5
Desired output:
grouper1 | grouper2 | sum(metric1) | Percentage
---------+----------+--------------+-----------
A        | C        | 6            | 33
A        | D        | 12           | 67
B | C | 15 | 50
B | D | 15 | 50
I got close to what I expected with
ModelA.objects.all(
).values(
'grouper1',
'grouper2'
).annotate(
SumMetric1=Window(expression=Sum('metric1'), partition_by=[F('grouper1'), F('grouper2')]),
GroupSumMetric1=Window(expression=Sum('metric1'), partition_by=[F('grouper1')])
)
However this returns a row for every original row in the database, like so:
grouper1 | grouper2 | sum(metric1) | Percentage
---------+----------+--------------+-----------
A        | C        | 6            | 33
A        | C        | 6            | 33
A        | C        | 6            | 33
A        | D        | 12           | 67
A        | D        | 12           | 67
A        | D        | 12           | 67
B        | C        | 15           | 50
B        | C        | 15           | 50
B        | C        | 15           | 50
B        | D        | 15           | 50
B        | D        | 15           | 50
B        | D        | 15           | 50
In this situation .distinct() might help; see the Django documentation on distinct() for more information.
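For instance, a sketch of how the pieces might fit together (untested; the final percentage arithmetic is done in Python after the query):
from django.db.models import F, Sum, Window

qs = (
    ModelA.objects
    .values('grouper1', 'grouper2')
    .annotate(
        SumMetric1=Window(expression=Sum('metric1'),
                          partition_by=[F('grouper1'), F('grouper2')]),
        GroupSumMetric1=Window(expression=Sum('metric1'),
                               partition_by=[F('grouper1')]),
    )
    .distinct()  # collapse the per-row repeats
)

# compute the percentage per (grouper1, grouper2) group in Python
rows = [
    {**r, 'percentage': 100 * r['SumMetric1'] / r['GroupSumMetric1']}
    for r in qs
]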
I'm working with an edge list in Stata, of the type:
var1 var2
a 1
a 2
a 3
b 1
b 2
1 a
2 b
I want to remove non-unique pairs such as 1a and 2b (which are the same as a1 and b2 for me). How can I go about this?
. clear
. input str1 (var1 var2)
var1 var2
1. a 1
2. a 2
3. a 3
4. b 1
5. b 2
6. 1 a
7. 2 b
8. end
. gen first = cond(var1 <= var2, var1, var2)
. gen second = cond(var1 <= var2, var2, var1)
. list
+------------------------------+
| var1 var2 first second |
|------------------------------|
1. | a 1 1 a |
2. | a 2 2 a |
3. | a 3 3 a |
4. | b 1 1 b |
5. | b 2 2 b |
|------------------------------|
6. | 1 a 1 a |
7. | 2 b 2 b |
+------------------------------+
. duplicates list first second
Duplicates in terms of first second
+--------------------------------+
| group: obs: first second |
|--------------------------------|
| 1 1 1 a |
| 1 6 1 a |
| 2 5 2 b |
| 2 7 2 b |
+--------------------------------+
. duplicates drop first second, force
Duplicates in terms of first second
(2 observations deleted)
. list
+------------------------------+
| var1 var2 first second |
|------------------------------|
1. | a 1 1 a |
2. | a 2 2 a |
3. | a 3 3 a |
4. | b 1 1 b |
5. | b 2 2 b |
+------------------------------+
The easy part of the answer is to use duplicates drop. But how do we get the data so that 1 a and a 1 are seen to be duplicates? We can sort the values within each observation so that (in this case) both sort to 1 a. That's the main idea, and cond() helps.
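Once the duplicates are dropped, the helper variables can be discarded if they are no longer needed:
drop first second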
I would like to check if a value has appeared in some previous row of the same column.
In the end I would like to have a cumulative count of the number of distinct observations.
Is there any solution other than concatenating all _n rows and using regular expressions? I'm getting there with concatenating the rows, but given the limit of 244 characters for string variables (in Stata <13), this is sometimes not applicable.
Here's what I'm doing right now:
gen tmp=x
replace tmp = tmp[_n-1]+ "," + tmp if _n > 1
gen cumu=0
replace cumu=1 if regexm(tmp[_n-1],x+"|"+x+",|"+","+x+",")==0
replace cumu= sum(cumu)
Example
+-----+
| x |
|-----|
1. | 12 |
2. | 32 |
3. | 12 |
4. | 43 |
5. | 43 |
6. | 3 |
7. | 4 |
8. | 3 |
9. | 3 |
10. | 3 |
+-----+
becomes
+-----------------------------------+
| x    | tmp                        |
|------|----------------------------|
1. | 12 | 12                        |
2. | 32 | 12,32                     |
3. | 12 | 12,32,12                  |
4. | 43 | 12,32,12,43               |
5. | 43 | 12,32,12,43,43            |
6. | 3  | 12,32,12,43,43,3          |
7. | 4  | 12,32,12,43,43,3,4        |
8. | 3  | 12,32,12,43,43,3,4,3      |
9. | 3  | 12,32,12,43,43,3,4,3,3    |
10. | 3 | 12,32,12,43,43,3,4,3,3,3  |
+-----------------------------------+
and finally
+-----------+
| x | cumu|
|-----|------
1. | 12 | 1 |
2. | 32 | 2 |
3. | 12 | 2 |
4. | 43 | 3 |
5. | 43 | 3 |
6. | 3 | 4 |
7. | 4 | 5 |
8. | 3 | 5 |
9. | 3 | 5 |
10. | 3 | 5 |
+-----------+
Any ideas how to avoid the 'middle step'? (For me that becomes very important when x contains strings instead of numbers.)
Thanks!
Regular expressions are great, but here, as often elsewhere, simple calculations suffice. With your sample data
. input x
x
1. 12
2. 32
3. 12
4. 43
5. 43
6. 3
7. 4
8. 3
9. 3
10. 3
11. end
end of do-file
you can identify first occurrences of each distinct value:
. gen long order = _n
. bysort x (order) : gen first = _n == 1
. sort order
. l
+--------------------+
| x order first |
|--------------------|
1. | 12 1 1 |
2. | 32 2 1 |
3. | 12 3 0 |
4. | 43 4 1 |
5. | 43 5 0 |
|--------------------|
6. | 3 6 1 |
7. | 4 7 1 |
8. | 3 8 0 |
9. | 3 9 0 |
10. | 3 10 0 |
+--------------------+
The number of distinct values seen so far is then just a cumulative sum of first using sum(). This works with string variables too. In fact this problem is one of several discussed within
http://www.stata-journal.com/sjpdf.html?articlenum=dm0042
which is accessible to all as a .pdf. search distinct would have pointed you to this article.
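To finish the job, the cumulative count from the question is then a single line (continuing the transcript, with the data still sorted by order):
. gen cumu = sum(first)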
Becoming fluent with what you can do with by:, sort, _n and _N is an important skill in Stata. See also
http://www.stata-journal.com/sjpdf.html?articlenum=pr0004
for another article accessible to all.