I have a quick question. In Excel you can refer to the value in row 20, column C as cell C20. What is the equivalent expression in a SAS database?
It's not really useful to think of a SAS dataset like a spreadsheet. Rather, think about it like a database table. Extracting a particular row is easy, but extracting a column requires a name rather than a position, like C in Excel.
If this is the dataset
      x1   x2   x3
    +----+----+----
  1 |  0 |  1 |  0
  2 |  1 |  2 |  3
Then in a data step, you can get the equivalent of B2 like so:
data b2;
set dataset;
if _n_ = 2 then output;
keep x2;
run;
The output dataset will then contain only the value you want. But you have to know that x2, for example, is the variable you want.
This isn't really what SAS is for, though.
Outside the context of a data step, you cannot explicitly refer to a single cell the way you can in Excel. SAS processes rows one at a time and has no natural way to address a cell directly.
In general, if you're referring to the value in a discussion, you would refer to the column as a variable. You could refer to the row number, although that has very little meaning in most instances (particularly as you can sort a dataset, changing all of the row numbers); instead, you would refer to it by its primary key. This would be whatever defines a unique row in your data. It might be a subject ID, for example, or some combination of several variables that together define a unique row.
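As a concrete illustration, here is a minimal sketch of pulling out one "cell" that way; the key variable subject_id and its value are made up for this example, and the dataset name is the same placeholder used above:
data one_value;
set dataset;
where subject_id = 1234; /* subject_id is a made-up key identifying the row of interest */
keep x2; /* the variable (column) of interest */
run;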
Related
Suppose that in Stata I have one stacked variable (in column 2) of stock returns, with data populating rows 1 to 2,000,000 (some blanks are replaced with dots). How can I create another variable next to it that starts at 1 and increases in increments of one (1, 2, 3, 4...) all the way down to 2,000,000? I need this kind of variable to merge datasets. Advice would be much appreciated.
If it helps: if I were to use VBA, I would find the last row of the stacked column and then create a variable on that basis, moving in increments of one (that would, of course, only work if Excel allowed 2 million rows).
gen long id = _n
will populate a variable with the observation number.
Note that you can merge on observation number. You don't need any identifier variable(s) to do it. In practice, I would almost always be very queasy about any merge not based on explicit identifiers, unless the datasets were visibly compatible (not so with 2 million observations).
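For example, here is a minimal sketch of both points; the file names returns.dta and other_data.dta are made up for illustration:
* made-up file names: returns.dta holds the stacked returns
clear all
use returns
gen long id = _n
* merge on observation number directly; no identifier variable needed
merge 1:1 _n using other_data
* or, if both files carry the generated id:
* merge 1:1 id using other_data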
I have two datasets in SAS. The first looks like this (let's say it is called data 1; I'm only concerned with two columns of it):
...and the second dataset (let's say it is called data 2) looks like this:
...and I am trying to extract the second column of the first dataset and insert it into the second dataset, to achieve something that looks like this:
Basic Problem Description:
I am trying to extract two columns from a dataset in SAS and add them as rows to a second dataset. The variable names in the first dataset are in a column of their own (entitled 'variable name') and in the second dataset each variable is a column header (a variable in itself) with corresponding data. The images I provided are overly simplistic, as the actual data itself is very long.
Basically, I am trying to find functions in SAS which allow me to do this.
What I have tried
- I have tried to extract the first two columns as a table using proc sql, converted them to a data frame using a data step, sorted them, then used proc transpose to try to convert them from long to wide, then tried to use some sort of append function to tack them on to the second dataset, but append did not work.
- I have tried to merge the two sets, but the merge does not seem to work after using proc transpose.
- I have also tried transposing the second dataset and then merging them, which worked (for some reason), but then I was not able to transpose the data back (so that I can analyze it, which is my purpose in doing all of this).
What functions would I use to go about this process?
Apologies for not providing replicable data, I am more searching for recommendations for functions rather than a detailed hard solution.
To force PROC TRANSPOSE to use a variable as the source for the new variable names, use the ID statement. So if you have this first dataset:
data tall;
input fruit $ count @@; /* @@ holds the line so multiple observations are read per record */
cards;
APPLE 1 PEACH 2 PEAR 2
;
You can use this code to convert it.
proc transpose data=tall out=wide;
id fruit;
var count;
run;
Then, if you have another dataset that already has the variables APPLE, PEACH, PEAR, etc., just set the two together.
data want;
set wide have ;
run;
I have a date table I make in a program with some calculated dates. There are 4 columns, with one row of dates.
referenceDate | startTwoMonth | startThreeMonth | startYear
-----------------------------------------------------------------
31Oct2015 | 01Sep2015 | 01Aug2015 | 01Nov2015
I would like to add these 4 columns to another table with many rows, and have these 4 date values occur in every row. (It makes it easier to do filtering later in the project.)
Currently, with a Query Builder step on my main data table, I use Add Tables to add the second date table above. Query Builder says it cannot find a suitable join condition, which is correct; there isn't one. In the table list on the left, I grab all of the columns from the data table and the date table and put them into the Select Data area on the right. When I run the query, it gives me the output I want, but I get an error that the tables aren't joined and that it could cause serious performance problems.
Is there a better way to do this?
Here's a data step that accomplishes the same thing:
data want;
set sashelp.class;
if _n_=1 then set single_row; /* read the one-row date table once; its values are retained on every output row */
run;
You could add in the values as calculated columns in an SQL step.
For the calculation, you would have something like '31oct2015'd and then give it a name of referenceDate.
To clarify, you're basically moving the logic from the reference table into the 'main' table.
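As a minimal sketch of that SQL approach (reusing sashelp.class as a stand-in for the main data table, and the date values from the question), each date literal becomes a calculated column with a name and a date format:
proc sql;
create table want as
select t.*,
'31oct2015'd as referenceDate format=date9.,
'01sep2015'd as startTwoMonth format=date9.,
'01aug2015'd as startThreeMonth format=date9.,
'01nov2015'd as startYear format=date9.
from sashelp.class as t;
quit;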
I'm using Stata 12.0.
I have a CSV file of exposures for days of the year e.g. 01/11/2002 (DMY).
I want these imported into Stata and it to recognise that it is a date variable. I've been using:
insheet using "FILENAME", comma
But by doing this I am only getting the dates as labels rather than names of the variables. I guess this is because Stata doesn't allow variable names to start with numbers. I have tried to reformat the cells as Dates in Excel and import but then Stata thinks the whole column is a Date and changes the exposure data into dates.
Any advice on the best course of action is appreciated...
As commented elsewhere, I too think you probably have a dataset that is best formatted as panel data. However, I address first the specific problem I think you have according to your question. Then I show some code in case you are interested in switching to a panel structure.
Here is an example CSV file open as a spreadsheet:
And here is the same file, open in a text editor. Imagine that the ; are , (this is due to my system's language settings).
Running this (in your case, use comma instead of delimiter(";")):
clear all
set more off
insheet using "D:\xlsdates.csv", delimiter(";")
results in
which I think is the problem you describe: dates as variable labels. You would like to have the dates as variable names. One solution is to use a loop and strtoname() to rename the variables based on the variable labels. The following goes after importing with insheet:
foreach var of varlist * {
local j = "`: variable l `var''"
local newname = strtoname("`j'", 1)
rename `var' `newname'
}
The result is
The function strtoname() will replace the illegal characters with underscores. See help strtoname.
Now, if you want to work with a panel structure, one way would be:
clear all
set more off
insheet using "D:\xlsdates.csv", delimiter(";")
* Rename variables
foreach var of varlist * {
local j = "`: variable l `var''"
local newname = strtoname("`j'", 1)
rename `var' `newname'
}
* Generate ID
generate id = _n
* Change to long format
reshape long _, i(id) j(dat) string
* Sensible name
rename _ metric
* Generate new date variable
gen dat2 = date(dat,"DMY", 2050)
format dat2 %d
list, sepby(id)
As you can see, there's no need to do anything beforehand in Excel or in an editor. Stata seems to be enough in this case.
Note: I've reused code from http://www.stata.com/statalist/archive/2008-09/msg01316.html.
A further note on performance: a CSV file with 122 variables or days (columns) and 10,000 observations or subjects (rows), plus 1 header row, will produce 1,220,000 observations after the reshape. I have tested this on an old machine with a 1.79 GHz AMD processor and 640 MB RAM, and the reshape takes approximately 8 minutes. Stata 12 has a hard limit of 2,147,483,647 observations (although available RAM determines whether you can actually reach it) and Stata SE a limit of 32,767 variables.
There seems to be some confusion here between the names that variables may have, the values that variables may have and the types that they may have.
Thus, the statement "Stata doesn't allow variables to start with numbers" appears to be a reference to Stata's rules for variable names; if it were true, numeric variables would be impossible.
Stata has no variable (i.e. storage) type that is a date. Strictly, it has no concept of a date variable, but dates may be held as strings or numbers. Dates may be held as strings insofar as any text indicating a date is likely to be a string that Stata can hold. This is flexible, but not especially useful. For almost all useful work, dates need to be converted to integers and then assigned a display format that matches their content to be readable by people. Stata has various conventions here, e.g. that daily dates are held as integers with 0 meaning 1 January 1960.
It seems likely in your case that daily dates are being imported as strings: if so, the function date() (also known as daily()) may be used to convert to an integer date. The example here just uses the minimal default display format for daily dates: friendlier formats exist.
. set obs 1
obs was 0, now 1
. gen sdate = "12/03/12"
. gen ndate = daily(sdate, "DMY", 2050)
. format ndate %td
. l
+----------------------+
| sdate ndate |
|----------------------|
1. | 12/03/12 12mar2012 |
+----------------------+
If your variable names are being misread, as guessed by @ChrisP, you may need to tell us more. A short and concrete example is worth more than a longer verbal description.
I have written a macro that uses proc univariate to calculate custom quantiles for variables in a dataset (say dsn1): %cust_quants(dsn= , varlist= , quant_list= ). The output is a summary dataset (say dsn2) that looks something like the following:
   q_1  q_2.5  q_50  q_80  q_97.5   q_99  var_name
     1    2.5    50    80    97.5     99  ex_var_1_100
    -2     10    25   150     500  20000  ex_var_pos_skew
-20000   -500  -150     0      10     50  ex_var_neg_skew
What I would like to do is to use the summary dataset to cap/floor extreme values in the original dataset. My idea is to extract the column of interest (say q_99) and put it into a vector of macro-variables (say q_99_1, q_99_2, ..., q_99_n). I can then do something like the following:
/* create summary of dsn1 as above example */
%cust_quants(dsn= dsn1, varlist= ex_var_1_100 ex_var_pos_skew ex_var_neg_skew,
quant_list= 1 2.5 50 80 97.5 99);
/* cap dsn1 var's at 99th percentile */
data dsn1_cap;
set dsn1;
if ex_var_1_100 > &q_99_1 then ex_var_1_100 = &q_99_1;
if ex_var_pos_skew > &q_99_2 then ex_var_pos_skew = &q_99_2;
/* don't cap neg skew */
run;
In R, it is very easy to do this. One can extract sub-data from a data-frame using matrix like indexing and assign this sub-data to an object. This second object can then be referenced later. R example--extracting b from data-frame a:
> a <- as.data.frame(cbind(c(1,2,3), c(4,5,6)))
> print(a)
V1 V2
1 1 4
2 2 5
3 3 6
> a[, 2]
[1] 4 5 6
> b <- a[, 2]
> b[1]
[1] 4
Is it possible to do the same thing in SAS? I want to be able to assign a column(s) of sub-data to a macro variable / array, such that I can then use the macro / array within a 2nd data step. One thought is proc sql's select ... into : syntax:
proc sql noprint;
select v2 into :v2_macro separated by " "
from a;
quit;
However, this creates a single string variable when what I really want is a vector of variables (or array--no vectors in SAS). Another thought is to add %scan (assuming this is inside a macro):
proc sql noprint;
select v2 into :v2_macro separated by " "
from a;
quit;
%let i = 1;
%do %until(%scan(&v2_macro, &i) = );
%let var_&i = %scan(&v2_macro, &i);
%let i = %eval(&i + 1);
%end;
This seems inefficient and takes a lot of code. It also requires the programmer to remember which var_&i corresponds to which variable for each future use. Is there a simpler / cleaner way to do this?
Please let me know in the comments if this is enough background / example. I'm happy to give a more complete description of why I'm doing what I'm attempting if needed.
First off, I assume you are talking about SAS/Base not SAS/IML; SAS/IML is essentially similar to R and has the same kind of operations available in the same manner.
SAS/Base is more similar to a database language than a matrix language (though it has some elements of both, and some elements of an OOP language, as well as being a full-featured programming language).
As a result, you do things somewhat differently in order to achieve the same goal. Additionally, because of the cost of moving data in a large data table, you are given multiple methods to achieve the same result; you can choose the appropriate method for the required situation.
To begin with, you generally should not store data in a macro variable in the manner you suggest. It is bad programming practice, and it is inefficient (as you have already noticed). SAS Datasets exist to store data; SAS macro variables exist to help simplify your programming tasks and drive the code.
Creating the dataset "b" as above is trivial in Base SAS:
data b;
set a;
keep v2;
run;
That creates a new dataset with the same rows as A, but only the second column. KEEP and DROP allow you to control which columns are in the dataset.
However, there would be very little point in this dataset, unless you were planning on modifying the data; after all, it contains the same information as A, just less. So for example, if you wanted to merge V2 into another dataset, rather than creating b, you could simply use a dataset option with A:
data c;
merge z a(keep=v2);
by id;
run;
(Note: I presuppose an ID variable of some form to combine A and Z.)
This merge combines the v2 column onto z, in a new dataset, c. This is equivalent to horizontally concatenating two matrices (although a straight-up concatenation would remove the 'by id;' requirement; in databases you do not typically do that, as order is not guaranteed to be what you expect).
If you plan on using b to do something else, how you create and/or use it depends on that usage. You can create a format, which is a mapping of values [ie, 1='Hello' 2='Goodbye'] and thus allows you to convert one value to another with a single programming statement. You can load it into a hash table. You can transpose it into a row (proc transpose). Supply more detail and a more specific answer can be provided.
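As one illustration of the hash table route, here is a minimal sketch built from the dataset and variable names in your question (dsn1, dsn2, var_name, q_99); it looks up each variable's 99th percentile by name and caps the values in a single data step:
data dsn1_cap;
if 0 then set dsn2(keep=var_name q_99); /* define var_name and q_99 in the PDV */
if _n_ = 1 then do;
declare hash h(dataset: 'dsn2'); /* load the quantile summary into memory */
h.defineKey('var_name');
h.defineData('q_99');
h.defineDone();
end;
set dsn1;
/* cap ex_var_1_100 at its 99th percentile */
var_name = 'ex_var_1_100';
if h.find() = 0 and ex_var_1_100 > q_99 then ex_var_1_100 = q_99;
/* cap ex_var_pos_skew at its 99th percentile; the negatively skewed variable is left uncapped */
var_name = 'ex_var_pos_skew';
if h.find() = 0 and ex_var_pos_skew > q_99 then ex_var_pos_skew = q_99;
drop var_name q_99;
run;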