I have a data set that has duplicate values of v1. I would like the v2 values to be replaced by the first value of v2 within each v1 group.
Data one;
v1 v2
1 20
1 23
1 21
2 36
3 51
4 44
4 20
I would like data=one to be changed to this:
Data one;
v1 v2
1 20
1 20
1 20
2 36
3 51
4 44
4 44
What procedure do I need to use?
A data step will do (assuming the data is already sorted the way you want):
data one;
set one;
by v1;
retain keeper;
if first.v1 then keeper=v2;   /* remember the first v2 in each v1 group */
else v2=keeper;               /* overwrite the rest of the group with it */
drop keeper;
run;
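If the data isn't already sorted by v1, a PROC SORT step like this sketch (assuming the dataset is named one, as above) would prepare it for the BY statement; PROC SORT's default EQUALS behavior keeps tied observations in their incoming order, so the first v2 within each v1 group is preserved:

proc sort data=one;
by v1;
run;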
I would like to create a new column whose values equal the average of values in other columns, but the number of columns I am averaging over is dictated by a variable. My data look like this, with 'length' dictating the number of columns x1-x5 that I want to average:
data have;
input ID $ length x1 x2 x3 x4 x5;
datalines;
A 5 8 234 79 36 78
B 4 8 26 589 3 54
C 3 19 892 764 89 43
D 5 72 48 65 4 9
;
run;
I would like to end up with the below where 'avg' is the average of the specified columns.
data want;
input ID $ length avg;
datalines;
A 5 87
B 4 156.5
C 3 558.3
D 5 39.6
;
run;
Any suggestions? Thanks! Sorry about the awful title, I did my best.
You have to do a little more work, since mean(of x[1]-x[length]) is not valid syntax. Instead, copy the values into a temporary array, take the mean of that array, and reset it on each row. For example, the temporary array would hold these values row by row:
tmp1 tmp2 tmp3 tmp4 tmp5
8 234 79 36 78
8 26 589 3 .
19 892 764 . .
72 48 65 4 9
data want;
set have;
array x[*] x:;
array tmp[5] _temporary_;
/* Reset the temp array */
call missing(of tmp[*]);
/* Save each value of x to the temp array */
do i = 1 to length;
tmp[i] = x[i];
end;
/* Get the average of the non-missing values in the temp array */
avg = mean(of tmp[*]);
drop i;
run;
Use an array: sum the first length elements of the array and then divide by length.
data have;
input ID $ length x1 x2 x3 x4 x5;
datalines;
A 5 8 234 79 36 78
B 4 8 26 589 3 54
C 3 19 892 764 89 43
D 5 72 48 65 4 9
;
data want;
set have;
array x(5) x1-x5;
sum=0;
do i=1 to length;
sum + x(i);
end;
avg = sum/length;
keep id length avg;
format avg 8.2;
run;
@Reeza's solution is good, but when there are missing values in x it will not always produce the desired result. It's better to use the SUM function and divide by the number of non-missing values. The code is also slightly simplified:
data want (drop=i s nm);
set have;
array a{*} x:;
s=0; nm=0;
do i=1 to length;
if missing(a{i}) then nm+1;   /* count the missing values */
s=sum(s,a{i});                /* SUM ignores missing values */
end;
avg=s/(length-nm);            /* average over the non-missing values only */
run;
Rather than writing your own code to calculate means, you could calculate all of the possible means and then use an index into an array to select the one you need.
data have;
input ID $ length x1 x2 x3 x4 x5;
datalines;
A 5 8 234 79 36 78
B 4 8 26 589 3 54
C 3 19 892 764 89 43
D 5 72 48 65 4 9
;
data want;
set have;
array means[5] ;
means[1]=x1;
means[2]=mean(x1,x2);
means[3]=mean(of x1-x3);
means[4]=mean(of x1-x4);
means[5]=mean(of x1-x5);
want = means[length];
run;
Results: want comes out to 87, 156.5, 558.33, and 39.6 for IDs A-D, matching the desired avg column.
I want to assign a value to some specific rows. I think showing it by example will be clearer. I have the following data:
Date Value
01/01/2001 10
02/01/2001 20
03/01/2001 35
04/01/2001 15
05/01/2001 25
06/01/2001 35
07/01/2001 20
08/01/2001 45
09/01/2001 35
My result should be:
Date Value Spec.Value
01/01/2001 10 1
02/01/2001 20 1
03/01/2001 35 1
04/01/2001 15 2
05/01/2001 25 2
06/01/2001 35 2
07/01/2001 20 3
08/01/2001 45 3
09/01/2001 35 3
As you can see, my condition value is 35, and it appears three times. I need to group my dates using this condition value.
data want;
set have;
retain specvalue 1;   /* group counter, starts at 1 */
if lag(value) = 35 then do;   /* the row after each 35 starts a new group */
specvalue + 1;
end;
run;
This is a SAS question. The following lines for two people are ordered by ascending AdmitNum. Ascending AdmitNum is based on ascending dates, which are omitted. Ages are provided for each AdmitNum. Age decreases between some of the observations. I don't want this to occur; age must stay the same or increase.
If the next age is less than the current age, then I want the current age to be written into the new variable NeedAge. In other words, retain the greater age while it is the greater age.
Person 2 has the wrong age, 43, in three rows. These should be 53. Person 2's age changes to 54 when AdmitNum=5 and this value, 54, should be retained.
After several attempts I have had only partial success. Can someone suggest a way to make NeedAge as shown below? Thanks.
ID AdmitNum HaveAge NeedAge
1 1 51 51
1 2 48 51
1 3 51 51
1 4 49 51
2 1 53 53
2 2 43 53
2 3 43 53
2 4 43 53
2 5 54 54
data have;
input ID AdmitNum HaveAge;
datalines;
1 1 51
1 2 48
1 3 51
1 4 49
2 1 53
2 2 43
2 3 43
2 4 43
2 5 54
;
run;
data want;
set have;
by ID;
if _n_ = 1 then NeedAge = HaveAge;
if HaveAge > NeedAge then NeedAge = HaveAge;
retain NeedAge;
run;
Check whether HaveAge exceeds the retained NeedAge and, if so, replace NeedAge with HaveAge; reset NeedAge at the start of each ID so one person's maximum doesn't carry over to the next.
data have;
input ID AdmitNum HaveAge;
datalines;
1 1 51
1 2 48
1 3 51
1 4 49
2 1 53
2 2 43
2 3 43
2 4 43
2 5 54
;
run;
data want;
set have;
by ID;
retain NeedAge;
if first.ID then NeedAge = HaveAge;                  /* reset at each new ID */
else if HaveAge > NeedAge then NeedAge = HaveAge;    /* otherwise keep the running maximum */
run;
Our university is forcing us to perform the old-school chi-square test using PROC FREQ (I am aware of the options in PROC UNIVARIATE).
I have generated a theoretical exponential distribution with beta=15 (and laboriously written down the values), and I have also generated 10,000 random variables from an exponential distribution with beta=15.
I first try to enter the frequencies of my random variables (in each interval) via a DATALINES statement:
data expofaktiska;
input number count;
datalines;
1 2910
2 2040
3 1400
4 1020
5 732
6 531
7 377
8 305
9 210
10 144
11 106
12 66
13 40
14 45
15 29
16 16
17 12
18 8
19 8
20 3
21 2
22 0
23 1
24 2
25 0
26 2
;
run;
This seems to work.
I then try to compare these observed frequencies with the theoretical values, using the chi-square test in PROC FREQ (the one we are supposed to use), as follows:
proc freq data=expofaktiska;
weight count;
tables number / testp=(0.28347 0.20311 0.14554 0.10428 0.07472 0.05354 0.03837 0.02749 0.01969 0.01412 0.01011 0.00724 0.0052 0.00372 0.00266 0.00191 0.00137 0.00098 0.00070 0.00051 0.00036 0.00026 0.00018 0.00013 0.00010 0.00007) chisq;
run;
I get the following error:
ERROR: The number of TESTP values does not equal the number of levels. For the table of number,
there are 24 levels and 26 TESTP values.
This may be because two intervals contain 0 observations. I don't really see a way around this.
Also, I don't get the chi-square test in the results viewer, nor the test probabilities; I only get the frequency/cumulative frequency of the random variables.
What am I doing wrong? Do the theoretical and actual distributions both need to be in the same form (probabilities vs. frequencies)?
We are using SAS 9.4
Thanks in advance!
/Magnus
You need the ZEROS option on the WEIGHT statement, so that the zero-count levels are kept and the number of levels matches the number of TESTP values.
data expofaktiska;
input number count;
datalines;
1 2910
2 2040
3 1400
4 1020
5 732
6 531
7 377
8 305
9 210
10 144
11 106
12 66
13 40
14 45
15 29
16 16
17 12
18 8
19 8
20 3
21 2
22 0
23 1
24 2
25 0
26 2
;
run;
proc freq data=expofaktiska;
weight count / zeros;
tables number / testp=(0.28347 0.20311 0.14554 0.10428 0.07472 0.05354 0.03837 0.02749 0.01969 0.01412 0.01011 0.00724 0.0052 0.00372 0.00266 0.00191 0.00137 0.00098 0.00070 0.00051 0.00036 0.00026 0.00018 0.00013 0.00010 0.00007) chisq;
run;
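As a side note, the TESTP values don't have to be typed by hand; they can be computed from the exponential CDF. A minimal sketch, assuming the intervals are 0-5, 5-10, ..., 125-130 (which matches the listed probabilities, e.g. 1 - exp(-5/15) is approximately 0.28347):

data testp;
beta = 15;
do interval = 1 to 26;
/* probability that an exponential(beta=15) value falls in this width-5 interval */
p = cdf('EXPONENTIAL', interval*5, beta) - cdf('EXPONENTIAL', (interval-1)*5, beta);
output;
end;
run;

proc print data=testp noobs;
var interval p;
format p 8.5;
run;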
I have been searching for a solution for a while, but I couldn't find any similar SAS question in the communities. So here is my question: I have a big SAS table, say with 2 class variables and 26 analysis variables:
A B Var1 Var2 ... Var25 Var26
-----------------------------
1 1 10 20 ... 35 30
1 2 12 24 ... 32 45
1 3 20 23 ... 24 68
2 1 13 29 ... 22 57
2 2 32 43 ... 33 65
2 3 11 76 ... 32 45
...................
...................
I need to calculate the cumulative sum of all 26 variables over class B within each value of A; that is, for A=1 it accumulates over B=1,2,3, and for A=2 it again accumulates over B=1,2,3. The resulting table will be like:
A B Cum1 Cum2 ... Cum25 Cum26
-----------------------------
1 1 10 20 ... 35 30
1 2 22 44 ... 67 75
1 3 42 67 ... 91 143
2 1 13 29 ... 22 57
2 2 45 72 ... 55 122
2 3 56 148 ... 87 167
...................
...................
I could do it the hard way and write out each of the 26 variables, then compute the cumulative sums over B. But I want a more practical solution that doesn't require listing all the variables.
A solution like this was suggested on one website:
proc sort data= (drop=percent cum_pct rename=(count=demand cum_freq=cal));
weight var1;
run;
I am not sure PROC SORT has anything like a WEIGHT statement, but if it works, I thought I might replace Var1 with something that refers to all the numeric variables so the step would process all of them:
proc sort data= (drop=percent cum_pct rename=(count=demand cum_freq=cal));
weight _numerical_;
run;
Any ideas?
One way to accomplish this is to use 2 'parallel' arrays, one for your input values and another for the cumulative values.
%LET N = 26 ;
data cum ;
set have ;
by A B ;
array v{*} var1-var&N ;
array c{*} cum1-cum&N ;
retain cum1-cum&N ;   /* keep the running totals across observations */
if first.A then call missing(of c{*}) ; /* reset on new values of A */
do i = 1 to &N ;
c{i} + v{i} ;   /* sum statement: missing var values are treated as 0 */
end ;
drop i ;
run ;
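The BY A B statement assumes the input is already sorted (or at least grouped) by A and B. If it isn't, a sort step like this sketch, assuming the input dataset is named have, would be needed first:

proc sort data=have ;
by A B ;
run ;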