How to interpret the PCA (fviz_pca_var)?

I need someone to check my analysis; I'd really appreciate any help. I made the visualization below from my data. The correlation coefficient between "Disch" and "precipitation" in my actual data is very low, yet in this graph they look highly correlated to me. In other words, does the cosine of the angle between two variables show the correlation between them? If so, then my "precipitation" and "Disch" should be highly correlated. However, they are not, and I am confused.

[correlation circle plot]


Dealing with an imbalanced dataset for multi-label classification

In my case, I have 33 labels per sample. The label tensor for a corresponding image looks like [0,0,1,0,1,1,1,0,0,0,0,0,…] (33 entries). The number of samples is quite low for some labels and quite high for others. I am trying to predict the regression values, so what would be the best approach to improve the prediction? I would like to apply a data-balancing technique, but so far I have only found balancing techniques for the multi-class setting. I would be grateful if you could share your knowledge of my problem, or any other idea to improve performance. Thanks in advance.
When using a single model to regress multiple values, it is usually beneficial to preprocess the targets so they lie in roughly the same range.
Look, for example, at the way detection models predict (regress) bounding-box coordinates: the values are scaled, and the net predicts only corrections.
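As a minimal sketch of the kind of target preprocessing the answer describes (the struct name and noise-free values are illustrative, not from any particular library): min-max scale each of the 33 target columns to [0, 1] before training, then invert the scaling on the network's outputs.

```cpp
#include <algorithm>
#include <vector>

// Min-max scale one regression target column to [0, 1] so that all
// target columns end up in roughly the same range during training.
struct MinMaxScaler {
    double lo = 0.0, hi = 1.0;

    void fit(const std::vector<double>& col) {
        auto [mn, mx] = std::minmax_element(col.begin(), col.end());
        lo = *mn;
        hi = *mx;
    }
    // Map a raw target into [0, 1] for training.
    double transform(double v) const { return (v - lo) / (hi - lo); }
    // Map a network output back to the original target range.
    double inverse(double s) const { return lo + s * (hi - lo); }
};
```

You would fit one scaler per target column and apply the inverse transform to the model's raw outputs at prediction time.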

Can one normalize a PCA for specific features?

When dealing with data sets that have hundreds of dimensions, some phenotypic and some metadata, I would like to "normalize" the effect of specific (multiple) features on PCAs.
I can get the contribution of specific features by plotting biplots; however, I would like to present the data with the effect of these features accounted for and normalized out.
I tried normalizing by specific columns, but I'm not sure this is the right way to go about it, or whether I can do it for multiple columns at once. I'm a newbie to dimension reduction and feel like I'm missing something fundamental here. I'd appreciate any input. Thanks!

Find position of joint from speed sensor

I have a joint with a speed encoder, so I can measure the speed of the joint at any time. How can I extract the current position in C++?
My idea is to find the area under the speed curve between two timestamps. Is this the correct way to find the position? Any other suggestions?
You have to integrate the speed values in order to get the position, so your guess is correct.
However, that estimate will be unreliable, because integration error accumulates and the position can drift by large amounts over time.
Encoders should be able to provide position data themselves (counts). See if you can get those somehow; that would be the best way in my opinion.
Hope this helps.
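The area-under-the-curve idea from the question can be sketched with the trapezoidal rule over timestamped speed samples (the function name is illustrative):

```cpp
#include <vector>

// Integrate timestamped speed samples with the trapezoidal rule to
// estimate joint position. Measurement noise accumulates, so the
// estimate drifts over time -- prefer encoder counts if available.
double integratePosition(const std::vector<double>& t,
                         const std::vector<double>& speed,
                         double p0 = 0.0) {
    double p = p0;
    for (std::size_t i = 1; i < t.size(); ++i) {
        double dt = t[i] - t[i - 1];
        p += 0.5 * (speed[i] + speed[i - 1]) * dt;  // area of one trapezoid
    }
    return p;
}
```

In a real controller you would accumulate one trapezoid per sensor sample rather than storing the whole history.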

Online Smoothing for Hand Tracking Data using Kalman Filters

I'm using Kinect with OpenNI/NITE. OpenNI can track human hands with the assistance of NITE. Also, OpenNI can smooth the tracked hand line, and I was trying to figure out how it does that.
I tried using a Kalman filter and replaced the old hand position with the Kalman-estimated one, but the smoother in OpenNI is still much better.
I'd appreciate any clues on how to smooth online data or how to set the parameters in a Kalman filter (something specific to hand tracking, since I already know what the parameters do).
Using a Kalman filter is not as easy as it seems. You need to choose a good motion model, a good state vector, and a good measurement model. Since I guess you are doing 3D tracking of position, not orientation (the x, y, and z position of the hands on the screen), I would choose the following:
State vector: [x, y, z, v_x, v_y, v_z]
Update equations: (x, y, z) = (x, y, z) + (v_x, v_y, v_z) * delta(t)
with the velocity held constant.
You also need to set the error covariance properly, as this models the error introduced by assuming constant velocity (which is not true).
Check this paper. Have a look at the Jacobians needed for the predict and update equations of the filter; they are important. If you set them to identity the filter will work, but it will only be accurate if you choose the Jacobians W (which multiplies Q), H, and A properly. Q and R are diagonal; try to set their values experimentally.
Hope this helps, good luck.
Here is a simple example that shows how to set the parameters of a Kalman filter.
The example offers a simple way to test the different smoothed outputs visually, and reading the comments also helped me understand the different parameters (noise, motion model, initialization, etc.).
It works quite well, the code is simple to understand, and it uses the OpenCV implementation.
Hope it helps!

3D reconstruction C++ with OpenCV..Fundamental Matrix too large

OK, I am posting my conundrums of life to Stack Overflow after 4 days of mindless programming when nothing seems to go right, or at least close to right. Sorry for being a little dramatic, but I feel like a lousy programmer today.
Anyway, my problem is:
To obtain Fundamental matrix using RANSAC (N>8).
I have two images with a wide baseline but sufficient overlap, so an adequate number of SURF keypoints (~308) are matched correctly (I plotted them).
Now the problem: I pass the 2D points to cv::findFundamentalMat, but I get completely baseless results. The function returns:
FundMat=[2.05148e-13 3.72341 -2.03671e+10
1.6701e+26 -4.17712 4.59533e+29
3.32414e+18 2.8843 1.91069e-26]
To circumvent the large dynamic range of the matrix, Hartley suggested normalising the data points (a normalisation in Euclidean space, not projective space). Even after doing that, the result is almost the same (10^-9 to 10^9).
I understand that FundMat is accurate only up to scale, but a spread from 10^-9 to 10^+9 is too much.
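For reference, Hartley's normalisation of the points (translate so the centroid is at the origin, then scale isotropically so the mean distance from the origin is sqrt(2)) can be sketched as follows; the type and function names are illustrative, and the same transform must be applied to both images' points, with the resulting F denormalised afterwards:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Hartley normalisation: translate the points so their centroid is at
// the origin, then scale so the mean distance from the origin is
// sqrt(2). Apply to both point sets before estimating F.
std::vector<Pt> normalizePoints(const std::vector<Pt>& pts) {
    double cx = 0.0, cy = 0.0;
    for (const Pt& p : pts) { cx += p.x; cy += p.y; }
    cx /= pts.size();
    cy /= pts.size();

    double meanDist = 0.0;
    for (const Pt& p : pts)
        meanDist += std::hypot(p.x - cx, p.y - cy);
    meanDist /= pts.size();

    double s = std::sqrt(2.0) / meanDist;  // isotropic scale factor
    std::vector<Pt> out;
    out.reserve(pts.size());
    for (const Pt& p : pts)
        out.push_back({s * (p.x - cx), s * (p.y - cy)});
    return out;
}
```

The similarity transforms T and T' used here must be folded back into the estimate via F = T'^T * F_normalised * T before using F on the original pixel coordinates.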
I referred to other questions here, but I don't seem to get any leads: findfundamentalmatrix-doesnt-find-fundamental-matrix and how-to-calculate-the-fundamental-matrix-for-stereo-vision.
Any ideas would be great. This is a very important step when considering uncalibrated images for the rest of the software pipeline.
In case the code is helpful (it's not indented and colored, though; there's too little space here):
https://sites.google.com/site/3drecon124/
It's solved... silly human error. There was a data-type conversion from double to float, which caused data to be fetched from incorrect locations in memory. Now it's smooth, and the epipolar constraint is satisfied up to scale.