Using a JSL script, I would like to extract the covariance matrix of a nonlinear model.
I have a 4PL curve. But when I request:
m["Logistic 4P"]["Parameter Estimates"]["Covariance of Estimates"]["Reference"][""];
JMP reports that it is an OutlineBox and therefore can't be converted into a data table or a matrix.
However, when right-clicking on it, I can convert it into both, so it must be possible using JSL.
Any ideas?
OK, finally found it. In case it helps someone:
m["Logistic 4P"]["Parameter Estimates"]["Covariance of Estimates"]["Reference"][1] << make into data table;
I have a number of matrices, such as:
I would like to transform this data in such a way that it ends up in one column, such as:
Here is the link to a testsheet with exactly this data: https://docs.google.com/spreadsheets/d/1kn6yYL3dsTZ7IL5Z8a5j0itfwZK9TkRQsBCzXdCG2zo/edit#gid=0
What I have done so far: I know that I can do this using "transpose" or "mtrans" (same thing). But unfortunately I would have to do it manually. I'm looking for just ONE formula to solve this problem, so I don't have to do it manually for every row.
use:
=ARRAYFORMULA(SPLIT(TRANSPOSE(SPLIT(QUERY(TRANSPOSE(QUERY(TRANSPOSE(
IF(MONATLICH!R3:Y<>""; "♠"&MONTH(MONATLICH!B3:B)&"♦"&MONATLICH!B3:B&"♦"&
MONATLICH!C3:C&"♦"&MONATLICH!R2:Y2&"♦"&MONATLICH!R3:Y&"♦"&MONATLICH!Z3:AG; ))
;;999^99));;999^99); "♠")); "♦"))
spreadsheet demo
I ran SegNet on my own dataset (following the SegNet tutorial). I see great results via test_segmentation.py.
My problem is that I want to see the real net output and not test_segmentation's own colorisation (by class).
For example, if I have trained the net with 2 classes, then after training I want to see not only 2 colors (one per class), but the real net segmentation values ([0.22, 0.19, 0.3, ...], lighter and darker as the net sees it).
I hope that I explained myself well. Thanks for helping.
You could use a Python script to achieve what you want. Take a look at this script.
The command out = out['argmax'] extracts the raw output, so you can get a segmentation map with 'lighter and darker' values, as you wanted.
When you say the 'real' net color segmentation, I will assume you mean the probability maps. Effectively, the last layer will have one map for every class; and if you check the function predict in inference.py, they take the argmax, that is, the channel (which represents the class) with the highest probability. If you want to get these maps, you just have to get the data without computing the argmax; something like:
predicted = net.blobs['prob'].data
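For example, to look at one of those maps directly you could do something like the following sketch (assuming, as above, that net is already loaded as in inference.py, that the softmax blob is named 'prob' with shape 1 x num_classes x H x W, and picking an arbitrary class index):

import scipy.misc

probs = net.blobs['prob'].data[0]  # drop the batch dimension
class_idx = 1                      # hypothetical class of interest
prob_map = probs[class_idx]        # per-pixel probabilities in [0, 1]

# save the raw probabilities as a grayscale image (lighter = higher probability)
scipy.misc.toimage(prob_map, cmin=0.0, cmax=1.0).save('prob_map.png')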
I solved it. The solution is to set cmin and cmax to range from 0 to 1 in the scipy saving method. For example: scipy.misc.toimage(output, cmin=0.0, cmax=1.0).save("/path/.../image.png")
I am trying to find an example to assist me to cluster some textual data I have. The data is in the form:
A,B,3
C,D,5
A,D,57
The first two entries are the members of a pair; the number is how often this pair occurs in the dataset. I have over 200,000 unique pairs.
Any tips? Thanks!!
Don't use k-means on such data.
It will not work.
What you have is a similarity matrix, not continuous vectors as needed for k-means. You can try hierarchical clustering (with a sparse similarity, not a distance; no, I won't write the code for you).
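A minimal sketch of that idea with SciPy, purely for illustration (scipy's linkage wants distances rather than similarities, so the counts are converted here; the names and the cluster count are made up, and with 200,000 unique pairs a dense matrix may be too large, so a sparse representation would be needed in practice):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# pair counts as in the question
pairs = [("A", "B", 3), ("C", "D", 5), ("A", "D", 57)]

items = sorted({x for a, b, _ in pairs for x in (a, b)})
idx = {name: i for i, name in enumerate(items)}

# symmetric similarity matrix from the co-occurrence counts
sim = np.zeros((len(items), len(items)))
for a, b, count in pairs:
    sim[idx[a], idx[b]] = sim[idx[b], idx[a]] = count

# linkage() expects distances, so convert: higher count -> smaller distance
dist = sim.max() - sim
np.fill_diagonal(dist, 0)

Z = linkage(squareform(dist), method="average")
print(dict(zip(items, fcluster(Z, t=2, criterion="maxclust"))))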
I'm running a program that uses Armadillo and save a cube object (the equivalent of a 3-dimensional array in R) of doubles using the command mycube.save("mycube", arma_ascii). However, I have not been able to load it properly in R.
What do you think would be the best format to use in order to load it in R?
A while back I stored matrices from a C++ program with:
m.save( "myMatrix.data" ,raw_ascii)
and read it in an R script with:
m <- as.matrix(read.table("myMatrix.data"))
This worked quite well. However, I'm not sure about saving cubes; you may need to split the cube into slices and re-assemble it in R.
We currently seem to have "half" of the needed support: only a wrap() method to return Cube objects to R.
So if someone were to contribute a working as<>() converter, we could (trivially) rely on R's (nice, binary, compressed, ...) serialization via e.g. the saveRDS() and readRDS() functions.
It seems that the format raw_ascii can do the trick. It is not exactly a 3-dimensional array but a matrix concatenating (row-wise) all the individual matrices, which can then be manipulated.
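A small sketch of that re-assembly, shown with numpy purely for illustration (the file name and cube dimensions below are assumptions; the same reshape idea carries over to R):

import numpy as np

# assumes raw_ascii wrote the cube as all slices stacked row-wise, as described above
n_rows, n_cols, n_slices = 4, 3, 5             # hypothetical cube dimensions
flat = np.loadtxt("mycube.data")               # shape: (n_slices * n_rows, n_cols)
cube = flat.reshape(n_slices, n_rows, n_cols)  # cube[k] is the k-th slice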
I am learning how to do data mining and I am using this data set from UCI's website.
http://archive.ics.uci.edu/ml/datasets/Forest+Fires
The problem I am encountering is how to deal with the area class. My understanding from the description is that I need to apply ln(x+1) to area using AddExpression.
Am I going in the correct direction with this? Or are there other filters I should investigate? Thank you.
I'll try to answer your question based on the little information you provide. I haven't worked with the forest-fires data set, but by inspection I see that the target attribute "area" often has the value 0. You probably can't simply filter out the rows with area = 0; your dataset might become too small.
I think you are asked to regress some attribute(s) against log(area) in order to linearize it. However, when you try to calculate the log of the area, values such as log(0) are a problem; values between 0 and 1 might also be problematic.
So a common fix is to add 1 to the value of "area". This introduces a systematic error, but it is small, it removes all 0-values, and you can still derive useful models from your log(x+1)-transformed dataset.
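Just to illustrate the effect of that fix on a few made-up area values (not taken from the data set):

import numpy as np

area = np.array([0.0, 0.36, 4.61, 1090.0])
print(np.log(area + 1))  # log(0+1) = 0, so zero-area rows remain usable
# roughly [0. 0.307 1.725 6.995]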
And yes, in Weka you do this on the Preprocess tab with the AddExpression filter, using an expression such as log(aN+1), where aN refers to the area attribute. This creates a new attribute; you can then remove the old area attribute.
Of course, in interpreting your model, you should be aware of the transformation. If you just want to find out what the significant independent attributes are in your linear regression model, I'd say the transformation does not matter. The data points are just shifted a little bit.