How can I combine two rows to get one row?
I believe you are looking for the UPDATE statement in a DATA step:
http://support.sas.com/documentation/cdl/en/basess/58133/HTML/default/viewer.htm#a001329151.htm
Example (not sure of your dataset names):
DATA dset3;
UPDATE dset1 dset2;
BY region;
RUN;
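For a concrete illustration (the variable names and values below are made up, since the example image isn't reproduced here), assume both datasets are keyed by region and each holds part of the values. UPDATE fills in the missing values from the second (transaction) dataset without overwriting non-missing values in the first (master) dataset:
DATA dset1;
INPUT region $ sales profit;
CARDS;
EAST 100 .
WEST 200 .
;
DATA dset2;
INPUT region $ sales profit;
CARDS;
EAST . 10
WEST . 20
;
PROC SORT DATA=dset1; BY region; RUN;
PROC SORT DATA=dset2; BY region; RUN;
DATA dset3;
UPDATE dset1 dset2;
BY region;
RUN;
dset3 then has one row per region: EAST 100 10 and WEST 200 20, i.e. each pair of rows has been combined into one.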
I have two datasets in SAS. The first looks like this (let's say it is called data 1; I'm only concerned with two columns of it):
...and the second dataset (let's say it is called data 2) looks like this:
...and I am trying to extract the second column of the first dataset and insert it into the second dataset, to achieve something that looks like this:
Basic Problem Description:
I am trying to extract two columns from a dataset in SAS and add them as rows to a second dataset. The variable names in the first dataset are in a column of their own (entitled 'variable name') and in the second dataset each variable is a column header (a variable in itself) with corresponding data. The images I provided are overly simplistic, as the actual data itself is very long.
Basically, I am trying to find functions in SAS which allow me to do this.
What I have tried
-I have tried to extract the first two columns as a table using proc sql, converted them to a dataset using a data step, and sorted them; then I used proc transpose to try to convert them from long to wide, and then tried to use some sort of append function to tack them on to the second dataset, but append did not work.
-I have tried to merge the two sets, but the merge does not seem to work after using proc transpose.
-I have also tried transposing the second dataset and then merging them, which worked (for some reason) but then I was not able to transpose the data back (so that I can analyze it, which is my purpose in doing all of this).
What functions would I use to go about this process?
Apologies for not providing replicable data, I am more searching for recommendations for functions rather than a detailed hard solution.
To force PROC TRANSPOSE to use a variable as the source for the new variable names, use the ID statement. So if you have this first dataset:
data tall;
input fruit $ count @@;
cards;
APPLE 1 PEACH 2 PEAR 2
;
You can use this code to convert it.
proc transpose data=tall out=wide;
id fruit;
var count;
run;
Then, if you have another dataset that already has the variables APPLE, PEACH, PEAR, etc., just set the two together.
data want;
set wide have ;
run;
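One small caveat, assuming the default PROC TRANSPOSE output: the wide dataset also carries an automatic _NAME_ variable (holding the name of the transposed variable, here count). If you don't want it in the combined data, drop it when setting the two together:
data want;
set wide(drop=_name_) have;
run;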
I am new to SAS and wanted to ask for an interpretation of the code below for the two datasets that I have (sid and fullsid):
data sid; set sid fullsid; run;
Can you please help me understand what the line of code above does? Does it append the two datasets?
It is spelled out in the documentation.
Combining SAS Data Sets
Use a single SET statement with multiple data sets to concatenate the specified data sets. That is, the number of observations in the new data set is the sum of the number of observations in the original data sets, and the order of the observations is all the observations from the first data set followed by all the observations from the second data set, and so on.
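A minimal sketch with two throwaway datasets (names a, b, and both are placeholders) shows the row counts adding up:
data a;
do i = 1 to 3;
output;
end;
run;
data b;
do i = 4 to 5;
output;
end;
run;
data both;
set a b; /* 3 + 2 = 5 observations; all of a's rows come first */
run;
Applied to your question: data sid; set sid fullsid; run; reads all of sid's observations, then all of fullsid's, and writes the combined result back over sid. So yes, it appends (concatenates) the two datasets.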
I'm looking to do a PROBT calculation for a column of values in SAS, not just a single value, and to get two-tailed p-values.
I have the following code I'd like to amend:
data all_ssr;
x=.551447;
df=25;
p=(1-probt(abs(x),df))*2;
put p=;
run;
However, I would like x to be a column of values within another file. I have tried work.ttest, which is just a file of t-test values.
Many thanks
You need to use a set statement to access data from another SAS dataset.
data all_ssr;
set work.ttest; /*Dataset containing column of values*/
df=25;
p=(1-probt(abs(x),df))*2;
run;
Removing the put statement avoids clogging up the log.
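For example, a minimal sketch, assuming the column in work.ttest is actually named x (if it has a different name, use that name in the probt call) and that df is constant at 25:
data work.ttest; /* stand-in for the real file of t values */
input x @@;
cards;
0.551447 -1.2 2.3
;
data all_ssr;
set work.ttest; /* one two-tailed p-value is computed per row */
df=25;
p=(1-probt(abs(x),df))*2;
run;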
I have a dataset structured with an ID and two other variables.
The id is not unique; it appears in the dataset more than once (a patient could receive more than one clinical treatment). How can I drop an entire observation (the entire line) only if it is a perfect clone of a previous observation (based on the other two variable values)? I don't want to use an insanely long if statement.
Thanks.
proc sql;
select distinct * from olddata;
quit;
Sounds like an easy SQL fix. The SELECT DISTINCT clause will remove any completely duplicate rows from the result if you select all columns.
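If the goal is to write the de-duplicated rows out to a new dataset rather than just display them, the query can be wrapped in CREATE TABLE (a sketch, with newdata as a placeholder name):
proc sql;
create table newdata as
select distinct * from olddata;
quit;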
If you specifically want to identify whether two consecutive lines are identical (but are not looking to match identical lines separated by other lines), you can use NOTSORTED on a BY statement and then the FIRST. and LAST. automatic variables.
data want;
set have;
by id var1 var2 notsorted;
if first.var2;
run;
That will keep the first record for any group of identical id/var1/var2, so long as they're consecutive in the dataset. Of course, if you sort the dataset by id var1 var2 first (see the sketch below), this will always remove the duplicates; without sorting, it still works for removing consecutive pairs (or more) that happen to be collocated.
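A sketch of the sorted variant, using the same placeholder names:
proc sort data=have;
by id var1 var2;
run;
data want;
set have;
by id var1 var2;
if first.var2; /* keep one row per distinct id/var1/var2 combination */
run;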
I prefer #JJFord's answer, but for the sake of completeness, this can also be done using the noduprecs option in proc sort:
proc sort data=mydata noduprecs;
by id;
run;
Note that noduprecs only removes an observation when it is identical to the observation immediately before it, so the by variable does matter: it has to bring exact duplicates next to each other. Sorting by all of the variables is the safe way to do that.
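For example, a sketch with the question's placeholder variable names, sorting by all three variables so that exact duplicates land next to each other:
proc sort data=mydata noduprecs;
by id var1 var2;
run;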
I'm trying to keep only the duplicate results for one column in a table. This is what I have.
proc sql;
create table DUPLICATES as
select Address, count(*) as count
from TEST_TABLE
group by Address
having COUNT gt 1
;
quit;
Is there any easier way to do this or an alternative I didn't think of? It seems goofy that I then have to re-join it with the original table to get my answer.
proc sort data=TEST_TABLE;
by Address;
run;
data DUPLICATES;
set TEST_TABLE;
by Address;
if not (first.Address and last.Address) then output; /* keep every row whose Address occurs more than once */
run;
Using proc sort with nodupkey and dupout will dedupe the data and give you an "out" dataset with duplicate records from the original dataset, but the "out" dataset does not include EVERY record with the ID variable - it gives you the 2nd, 3rd, 4th...Nth. So you aren't comparing all the duplicate occurrences of the ID variable when you use this method. It's great when you know what you want to remove and define enough by variables to limit this precisely, or if you know that your records with duplicate IDs are identical in every way and you just want them removed.
When there are duplicates in a raw file I receive, I like to compare all records where ID has more than one occurrence.
proc sort data=test nouniquekey
uniqueout=singles
out=dups;
by ID;
run;
nouniquekey deletes unique observations from the "out" dataset
uniqueout=dsname stores the unique observations
out=dsname stores the remaining (non-unique) observations
Again, this method is great for working with messy raw data, and for debugging if your code might have produced duplicates.
That's easy using proc sort:
proc sort data=TEST_TABLE nodupkey dupout=dups;
by Address;
run;
Refer to this documentation for further information
select field,count(field) from table
group by field having count(field) > 1
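As a related PROC SQL trick (using the question's TEST_TABLE and Address names, and accepting the remerge note PROC SQL writes to the log): when SELECT * is combined with GROUP BY and a HAVING condition on the count, PROC SQL re-merges the summary back onto the detail rows, so every row whose Address occurs more than once is kept without a separate join.
proc sql;
create table DUPLICATES as
select *
from TEST_TABLE
group by Address
having count(*) > 1;
quit;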