I would like to export the results of cross-section dependence tests for 12 panel data sets to a table, in order to compare them with similar tests done in different software. Below is the regression and test example from the xtcsd help page (unfortunately its example dataset is not available, but a similar example dataset, tbl15-1.dta from the xttest2 page, is). The instructions below should make clear what I'm trying to achieve:
use "http://fmwww.bc.edu/ec-p/data/Greene2000/TBL15-1.dta"
xtset firm year
xtreg i f c, fe
xtcsd, pesaran
To display the test statistic, I can use
return list
How do I access the p-value for that statistic?
I have found how to export estimation results using the command esttab.
How do I export test results to a file in Stata?
Following Maarten Buis's comment below on the p-value, here is how I exported the test results to a CSV file using low-level file access:
file open xtcsdfile using xtcsd.csv, write replace
file write xtcsdfile "pesaran,pvalue" _n
file write xtcsdfile (r(pesaran)) "," (2*(normal(-abs(r(pesaran))))) _n
file close xtcsdfile
The Pesaran statistic will (asymptotically) follow a standard normal distribution if the null hypothesis is true, so the two-sided p-value is 2*normal(-abs(r(pesaran))).
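Since the goal is 12 panel data sets in one comparison table, the same logic can be wrapped in a loop. A minimal sketch, where the data set names and the panel/regression variables are placeholders for your own:
file open xtcsdfile using xtcsd.csv, write replace
file write xtcsdfile "dataset,pesaran,pvalue" _n
foreach ds in panel1 panel2 panel3 {   // placeholder names; list all 12 files here
    use `ds'.dta, clear
    xtset firm year
    xtreg i f c, fe
    xtcsd, pesaran
    file write xtcsdfile "`ds'," (r(pesaran)) "," (2*normal(-abs(r(pesaran)))) _n
}
file close xtcsdfile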
I am classifying iris data using a decision tree (C4.5), random forest, and naive Bayes. I am using the datasets downloaded from iris-train and iris-test. When I train all the classifiers, everything is fine: 'classifier output', 'Detailed accuracy with class' and 'confusion matrix' all show proper results. But when I select the iris-test file under Weka Explorer > Classify > Test options, choose 'output prediction' as 'csv' under 'more options', and click Start, I get the result shown in the figure below. The 'classifier output' shows the classified samples correctly, but 'Detailed accuracy with class' and 'confusion matrix' contain all zeros. Any suggestion as to where I am going wrong in selecting the options? Thank you.
The confusion matrix shows you how well your trained classifier performs by comparing the actual class of the instances in the test set with the class that was predicted by the classifier. But you are supplying a test set with no class information, so there's nothing to compare against. This is why you see
Total Number of Instances 0
Ignored Class Unknown Instances 120
in the output in your screenshot.
Typically you would first evaluate the performance of your classifier using cross-validation, or a test set that has class information. Then you can use the trained classifier to classify unknown data, for example using the 'Re-evaluate model on current test set' right-click option as described in the help.
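For the evaluation statistics to be computed, each test instance needs an actual class value rather than ?. A sketch of the difference in the ARFF file (attribute names assumed from the standard iris.arff):
@relation iris_test
@attribute sepallength numeric
@attribute sepalwidth numeric
@attribute petallength numeric
@attribute petalwidth numeric
@attribute class {Iris-setosa,Iris-versicolor,Iris-virginica}
@data
% labelled instance: the prediction can be checked against Iris-setosa
5.1,3.5,1.4,0.2,Iris-setosa
% unlabelled instance: the ? class makes Weka ignore it during evaluation
5.9,3.0,5.1,1.8,?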
I'm running a long list of regressions in Stata, exporting the results with outreg2. At seemingly random points, execution stops at some outreg2 call with the error "file handle __00000G not found". When I rerun the whole exercise, it works after a few tries. Do you have any idea what the reason could be?
My code looks as follows (further regressions of the same type follow):
xtreg l_GDP_capita i.year date_intens, fe r
outreg2 using year_DG_FE_gdp, excel replace label addtext(DG FE, YES, YEAR FEs, YES) drop(i.year) ctitle(log GDP per capita, AP)
xtreg l_GDP_capita i.year date_intens if any_lez==1, fe r
outreg2 using year_DG_FE_gdp, excel append label addtext(DG FE, YES, YEAR FEs, YES) drop(i.year) ctitle(log GDP per capita, LEZ)
Two thoughts:
(1) your file name needs to be in quotations, so outreg2 using "year_DG_FE_gdp", [blah] instead of outreg2 using year_DG_FE_gdp, [blah].
(2) In case you haven't done so, you need to set a directory. cd "whatever filepath you want to save to" (or a global directory if you want to get fancy). You may have done this earlier in your code (I usually do it at the top of a .do file), but since you've only posted the snippet here I can't tell. Forgetting it is one of the classic problems people tend to have with outreg.
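Putting the two thoughts together, a minimal sketch (the directory path is a placeholder):
cd "C:/my_results"                                  // set the output directory first
xtreg l_GDP_capita i.year date_intens, fe r
outreg2 using "year_DG_FE_gdp", excel replace label // quoted file name; replace on the first call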
Not 100% sure, but I am having the same problem and I think it may arise when outreg2 writes to a directory that is being synced (e.g. Dropbox, Google Drive). I believe the conflict comes from a permissions clash: Stata tries to change the file while the sync software is uploading the change. Pausing syncing of the directory while the Stata commands run seemed to help in my case.
Is there a simple way to export the "underlying" data of a Stata graph in order to reproduce that graph in MS Excel? Imagine you create a ROC curve using roctab y yhat, graph and you want to reproduce that graph in Excel.
I assume that you do not have access to the actual raw data that was used to compile the .gph in the first place, and somehow want to reverse engineer the .gph file... then, eek, good luck!
If, however, you do have access to the data originally used, then with Stata 13 you can use the new putexcel command.
A more detailed description of the putexcel command can be found in the Stata press release on exporting tables to Excel.
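As a small sketch of the Stata 13 putexcel syntax, assuming roctab leaves the area under the curve in r(area):
. roctab y yhat
. putexcel A1=("AUC") B1=(r(area)) using roc_results, replace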
The data in a .gph file are stored in the serset format between the opening and closing serset tags. There's no utility I know of that will parse the serset information, but the format is very similar to Stata's dta format (v115 and below). I wrote up the basic file format information here. The Python library pandas has code for reading/writing dta files, so with that you could probably create your own serset reader/writer.
Here is a sample program .do file, sampleprog.do:
program sampleprog
egen newVar = group(`1' `2')
end
How can I post it on my website (or dropbox), so that other people could install it to their Stata like this?
net from http://www.mywebsite.com/sampleprog.do
*** or maybe like this:
ssc install ...
I read the documentation about stata.toc... but I did not quite get it. What files should I upload, and should they all be in one folder?
(PS: I definitely can simply email the .do file but this is not an option in my case.)
Here is a full explanation of how to share program or data files with others using your own website. I tried using Dropbox, but Stata 12 appears to have issues with https, which is the protocol for all Dropbox public links. If you want to use Dropbox, I recommend creating a shared folder that will sync on your collaborators' machines. The rest of this answer assumes you have a website serving pages over http or are using Stata 13, which supports https.
If this is a one-time thing, you can skip the rest of this answer by putting the file on your website and telling your collaborator to type:
. copy http://your-site.com/ado/program.ado program.ado
That will copy the ado file at the specified URL into the user's current directory. If you want to provide information about your files, plan on sharing with multiple people, or need to maintain and document a set of files, read on!
Step 1 Create a folder on your website to hold the programs. I will call mine ado/
Step 2 Add the program files, help files, and data files you want to share. For this example, I have created a simple ado file called unique.ado with the following contents:
********************************************** unique.ado
capture program drop unique
program define unique
*! Count and number observations within group defined by varlist
* Example: unique person_id, obs(prow) tobs(pcount) sortby(time)
* to count and number rows by a variable called person_id
syntax varlist, obs(name) tobs(name) [sortby(varlist)]
bys `varlist' (`sortby') : gen long `obs' = _n
bys `varlist' (`sortby') : gen long `tobs' = _N
la var `obs' "Number of this row within `varlist' group."
la var `tobs' "Total number of rows with identical `varlist' values."
end
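A quick usage sketch, mirroring the example in the comments (person_id and time stand for variables in your data):
. unique person_id, obs(prow) tobs(pcount) sortby(time)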
Step 3 Create a file called stata.toc to describe the files you wish to share. Here is mine:
********************************************** stata.toc
v 3
d Program to count observations by group
p unique [The unique.ado program for counting observations by group]
These files can be complicated. There are many features I won't cover here, but you can read this documentation to learn more.
Step 4 Create a package file for each of the packages defined by the lines in stata.toc that start with the letter p. Here is my package file for the unique package defined above:
********************************************** unique.pkg
v 3
d unique
d Program to count observations by group
d Distribution-Date: 28 June 2012
f unique.ado
Your directory now looks like this:
ado/
stata.toc
unique.ado
unique.pkg
Step 5 Use the site! Here are the commands to enter.
. net from http://www.example.com/ado/
. net describe unique
. net install unique
Here is what you'll see after entering the first command:
-----------------------------------------------------------------------------------
http://www.example.com/ado/
Program to count observations by group
-----------------------------------------------------------------------------------
PACKAGES you could -net describe-:
unique [The unique.ado program for counting observations by group]
-----------------------------------------------------------------------------------
The second command, net describe unique, will tell you more about the package:
---------------------------------------------------------------------------------------
package unique from http://www.example.com/ado
---------------------------------------------------------------------------------------
TITLE
unique
DESCRIPTION/AUTHOR(S)
Program to count observations by group
Distribution-Date: 28 June 2012
INSTALLATION FILES (type net install unique)
unique.ado
---------------------------------------------------------------------------------------
The third command, net install unique, will install the package:
checking unique consistency and verifying not already installed...
installing into /Users/cpoliquin/Library/Application Support/Stata/ado/plus/...
installation complete.
EDIT
See Nick's comments in the answer below. I intended this example to be simple, and I don't expect other people to use this program. If you plan on submitting things to the Stata Journal or SSC, then his comments certainly apply! I hope this answer can serve as a decent tutorial for those confused by the official documentation.
This will be too long for a comment, so it is going to be an extra answer.
Your example uses the program name unique. If you type search unique, all (or, in Stata 13, just search unique) you will find that a user-written program with the same name has been installed on SSC since 1998. This will create a clash of names for your users if (and only if) they attempt to use your program and also that earlier program. The more general advice is to search to see whether a program name is already in use, to try to avoid these problems.
Specifically, although you may just be using your unique as an arbitrary example, note that it contains bugs. An int doesn't contain enough bits to hold observation numbers exactly for large datasets. Also, as a matter of style, unique can change the sort order of your data, which is widely considered poor data management practice.
Your example concerns dissemination of a program file without an accompanying help file. Suffice it to say that the SSC site would never accept such a program and the Stata Journal would not even review a paper based on such a submission before a help file was written to accompany it. Including explanatory comments with the code may be sufficient for your personal practices, but it falls below general Stata standards.
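For reference, such a help file would be a unique.sthlp shipped in the same package (and listed on its own f line in unique.pkg). A minimal sketch of how it might begin, in SMCL (Stata's help markup):
{smcl}
{* *! version 1.0 28jun2012}{...}
{title:Title}

{phang}{bf:unique} {hline 2} Count and number observations within groups defined by a varlist{p_end}

{title:Syntax}

{p 8 16 2}{cmd:unique} {varlist}{cmd:,} {opt obs(name)} {opt tobs(name)} [{opt sortby(varlist)}]{p_end}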
Stata 13 now supports https. See http://www.stata.com/manuals13/u.pdf, Section 3.6.
In short, I appreciate that you are trying to explain how to do something, but it is already well documented, and explicitly and implicitly some of your recommendations are below community standards.
I am trying to use Weka to classify text. What I do is this:
I create one big ARFF file with all of the data: all_of_it.arff.
I split that data into training and test sets: train.arff and test.arff.
I do feature selection on the training set and output a new training file: train_fs.arff.
I build a classifier with only those selected features.
And the problem is.....
I don't quite know how to standardize the test set so that it uses only the features I selected from the training set. Something like: create a new test file from test.arff according to train_fs.arff.
I tried using
java -cp weka.jar weka.filters.unsupervised.attribute.Standardize -b -i train_fs.arff -o train2.arff -r test.arff -s test2.arff
but I got the infamous "Src and Dest differ in # of attributes" error.
Is there any way to normalize/standardize the sets according to an ARFF file (namely, my new training data with fewer features)? I don't see how to do this with the Standardize or StringToWordVector filter.
Batch filtering is one solution to your problem.
Pros:
It applies the same filter to your test dataset as to your training dataset, so after feature selection the two datasets remain compatible
Cons:
It is only available from the command-line interface or Weka's Java API
The two datasets must be filtered at the same time
You can read more about Batch filtering here.
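For example, a feature-selection filter can be run in batch mode so that the attributes chosen on the training data are applied identically to the test data in one pass. This is a sketch only; the evaluator and search method shown here (CfsSubsetEval, BestFirst) are assumptions, so substitute whatever you used:
java -cp weka.jar weka.filters.supervised.attribute.AttributeSelection -E "weka.attributeSelection.CfsSubsetEval" -S "weka.attributeSelection.BestFirst" -b -i train.arff -o train_fs.arff -r test.arff -s test_fs.arff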
You may also want to look into InputMappedClassifier. It is a wrapper classifier that addresses incompatible training and testing data.