PVLIB decomposing for standard components - pvlib

My question is about the pvlib library. We download some forecast data using forecast.py. I am not sure whether that module already does the clear-sky scaling/decomposition that produces the GHI/DNI/DHI, but it ultimately gives us time-series data. As the next step, we calculate plane-of-array irradiance from these standard components. Here,
do we need to do clear-sky decomposition again, or
is it enough to use these ghi/dni/dhi values directly to calculate the irradiance correctly? For example,
poa = irradiance.get_total_irradiance(
    surface_tilt=surface_tilt,
    surface_azimuth=surface_azimuth,
    solar_zenith=solar_zenith,
    solar_azimuth=solar_azimuth,
    ghi=ghi,
    dni=dni,
    dhi=dhi,
    dni_extra=dni_extra,
    albedo=albedo,
    surface_type=surface_type,
    model='perez',
    model_perez='allsitescomposite1990')
Thank you so much.
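For reference, here is a minimal end-to-end sketch of the workflow described above, assuming the (now legacy) pvlib.forecast module; the coordinates, tilt, and azimuth are placeholders, and the comment about get_processed_data reflects my reading of the forecast module's documentation:

import pandas as pd
from pvlib import atmosphere, irradiance, solarposition
from pvlib.forecast import GFS

latitude, longitude, tz = 32.2, -110.9, 'US/Arizona'  # placeholder location
start = pd.Timestamp.now(tz=tz)
end = start + pd.Timedelta(days=3)

# get_processed_data() returns a DataFrame whose ghi/dni/dhi columns are
# derived from the forecast's cloud cover
fm = GFS()
data = fm.get_processed_data(latitude, longitude, start, end)

solpos = solarposition.get_solarposition(data.index, latitude, longitude)
dni_extra = irradiance.get_extra_radiation(data.index)
airmass = atmosphere.get_relative_airmass(solpos['apparent_zenith'])

poa = irradiance.get_total_irradiance(
    surface_tilt=30,                      # placeholder geometry
    surface_azimuth=180,
    solar_zenith=solpos['apparent_zenith'],
    solar_azimuth=solpos['azimuth'],
    dni=data['dni'],
    ghi=data['ghi'],
    dhi=data['dhi'],
    dni_extra=dni_extra,
    airmass=airmass,
    model='perez',
    model_perez='allsitescomposite1990')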

Related

How to make image comparison in OpenCV more coarse

I am writing code in Python on a Raspberry Pi to compare two images using mean squared error. The project is a personal home security thing.
My main goal is to detect a change between the images I capture from the Pi camera (if something is added to or removed from the scene), but right now my code is too sensitive: it is affected by changes in background lighting, which I do not want.
I have two options in front of me: either scrap my current logic and start over, or improve it to account for this noise (if I can call it that). I am searching for ways to improve my logic, but I wanted some guidance on how to go about it.
My biggest fear is that I am wasting time flogging a dead horse. Should I just look for some other algorithm to detect a change in the image, or should I use edge detection?
import numpy as np
import cv2
import os
from threading import Thread

###### Function Definitions ########################################

def mse(imageA, imageB):
    # the 'Mean Squared Error' between the two images is the
    # sum of the squared difference between the two images;
    # NOTE: the two images must have the same dimension
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    # return the MSE; the lower the error, the more "similar"
    # the two images are
    return err

def compare_images(imageA, imageB):
    # compute the mean squared error
    m = mse(imageA, imageB)
    print(m)

def capture_image():
    ## shell command to take a photo
    os.system(image_args)

## original image path
original_image_path = "/home/pi/Downloads/python-compare-two-images/originalimage.png"
## original_image_args is a shell command to take the reference photo
original_image_args = "raspistill -o " + original_image_path + " -w 320 -h 240 -q 50 -t 500"
os.system(original_image_args)
## read the greyscale of the image into the variable original_image
original_image = cv2.imread(original_image_path, 0)

## test image capture settings
image_args = "raspistill -o /home/pi/Downloads/python-compare-two-images/Test_Images/image.png -w 320 -h 240 -q 50 --nopreview -t 10 --exposure sports"
image_path = "/home/pi/Downloads/python-compare-two-images/Test_Images/"
image1_name = "image.png"

# create a new thread to take pictures
My_Thread = Thread(target=capture_image)
# start the thread
My_Thread.start()
flag = 0
while True:
    if My_Thread.is_alive():
        flag = 0
    else:
        flag = 1
    if flag == 1:
        flag = 0
        # read the newest capture, start the next capture, and compare
        image1 = cv2.imread(image_path + image1_name, 0)
        My_Thread = Thread(target=capture_image)
        My_Thread.start()
        compare_images(original_image, image1)
A first improvement is to adjust a gain to compensate for the global variation of the light: for example, take the average intensity of the two images and correct one of them by the ratio of those averages.
This can fail in case of a change in the foreground, which will influence the global average. If the changed foreground area is not too large, you can still get a good estimate by robustly fitting a linear model y = a.x between the intensities of the two images.
A worse, but unfortunately common, scenario is when the background illumination changes in a non-uniform way. A partial solution is to fit a non-uniform gain model, such as one obtained by bilinear interpolation between gains estimated at the corners of the image, or at a finer subdivision of it.
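As a rough illustration of the global-gain idea (a sketch only, not the robust-fitting procedure described above):

import numpy as np

def gain_compensate(imageA, imageB, eps=1e-6):
    # scale imageB so its mean intensity matches imageA before comparing
    a = imageA.astype("float")
    b = imageB.astype("float")
    gain = a.mean() / (b.mean() + eps)
    return a, np.clip(b * gain, 0, 255)

# usage: compensate first, then compute the MSE on the adjusted pair, e.g.
# a, b = gain_compensate(original_image, image1); print(mse(a, b))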
The topic of change detection is a heavily studied field. One of the basic options is to model each pixel as a Gaussian distribution: sample a lot of images and compute the mean and variance of each pixel.
For the pixels that tend to change when the lighting changes, the variance will be larger than for the pixels that don't change as much.
To detect movement at a certain pixel, you just need to choose what probability you consider an unusual change in the pixel value and use the Gaussian distribution you calculated to find the corresponding threshold value.
To make this solution efficient on your Raspberry Pi, first do an "offline" calculation of the per-pixel threshold values above which a change is considered movement and store them in a file; then, in the "online" stage, you just compare each pixel against its precomputed threshold.
For the "offline" stage I recommend using images recorded over the entire day, in order to capture all the variation you need per pixel. This stage can of course be done on your computer, and only the output file is uploaded to the Raspberry Pi.

Why two .wav that should have the same pitch don't

This is for a Python computational physics class. We are given two .wav files containing recordings of a harp and a piano playing the same note. We are supposed to "load the files and take the FFT of the amplitude. From the FFT determine the frequency of the fundamental for both instruments to 4 sig figs."
Here is what I have done.
import scipy.io.wavfile as sciwav
import matplotlib.pyplot as plt
from numpy.fft import rfft

# import data from the .wav files. This function returns the sampling rate and the data in an array.
harp_rate, harp_data = sciwav.read('/Users/williamweiss2/Desktop/Test2/harp.wav', mmap=False)
piano_rate, piano_data = sciwav.read('/Users/williamweiss2/Desktop/Test 2/piano.wav', mmap=False)

# perform the FFT on both sets of data and graph to find the index of the first harmonic.
plt.figure(1)
p = rfft(piano_data)
h = rfft(harp_data)

plt.subplot(121)
plt.plot(abs(p), 'b')
plt.title('Piano FFT')
plt.xlim(0, 100000)

plt.subplot(122)
plt.plot(abs(h), 'g')
plt.title('Harp FFT')
This all works just fine. Now, to find the frequency of the note played, this is what I was taught to do:
x value of the first spike in the FFT graph = Index.
deltaF = Sampling Rate / number of samples.
Index * deltaF = frequency of the note played.
I followed these steps and got two drastically different notes. Does anyone see a misstep in my process? Any ideas are appreciated, even if they go over my head. I am just a junior getting a Physics degree. Thanks very much in advance.
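In code, one way to automate the three steps above looks roughly like this sketch (assuming mono data and numpy's rfft as imported earlier, so the bin spacing is the sampling rate divided by the number of samples; note that argmax picks the dominant spike, which is not necessarily the first one read off the graph):

import numpy as np

peak_index = np.argmax(np.abs(p))         # index of the dominant spike
delta_f = piano_rate / len(piano_data)    # frequency resolution per FFT bin, in Hz
fundamental = peak_index * delta_f        # estimated frequency of the note played
print(round(fundamental, 4))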

Interactive pcolor in python

I have two numpy arrays of size (Number_of_time_steps, N1, N2). Each one represents velocities in a plane of size N1 x N2 for Number_of_time_steps, which is 12,000 in my case. The two arrays come from two fluid dynamics simulations in which a point is slightly perturbed at time 0, and I want to study the discrepancies caused by the perturbation in the velocity at each point of the grid. To do so, for each time step I make a plot with 4 subplots: a pcolor map of plane 1, a pcolor map of plane 2, the difference between the planes, and the difference between the planes on a log scale. I use matplotlib.pyplot.pcolor to create each subplot.
This is easy enough to do, but the problem is that I would end up with 12,000 such plots (saved as .png files on disk). Instead, I want a kind of interactive plot in which I can enter the time step and have it update the 4 subplots to that time step from the values in the two existing arrays.
If somebody has any idea how to solve this problem, I'd be happy to hear about it.
For interactive graphics, you should look into Bokeh:
http://docs.bokeh.org/en/latest/docs/quickstart.html
You can create a slider that will bring up the time slices you want to see.
If you can run from within IPython, you could just make a function to plot a given timestep:
# set the backend (run inside IPython)
%matplotlib
import numpy as np
import matplotlib.pyplot as plt

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='col', sharey='row')

def make_plots(timestep):
    # Clear the subplots
    for ax in (ax1, ax2, ax3, ax4):
        ax.cla()
    # Make your plots; array1 and array2 are your (12000, N1, N2) arrays.
    # Add whatever options you need
    ax1.pcolor(array1[timestep])
    ax2.pcolor(array2[timestep])
    ax3.pcolor(array1[timestep] - array2[timestep])
    # the fourth panel shows the difference on a log scale, as described in the question
    ax4.pcolor(np.log10(np.abs(array1[timestep] - array2[timestep]) + 1e-12))
    # Make axis labels, etc.
    ax1.set_xlabel(...)  # etc.
    # Update the figure
    fig.show()

# Plot some timesteps like this
make_plots(0)      # time 0
# wait some time, then plot another
make_plots(100)    # time 100
make_plots(11999)  # last time step (indices run 0-11999)
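If you stay inside matplotlib, a slider widget can drive the same make_plots function; a rough sketch (assumes matplotlib 2.2+ for the valstep argument):

from matplotlib.widgets import Slider

ax_slider = fig.add_axes([0.2, 0.02, 0.6, 0.03])          # x, y, width, height in figure coords
time_slider = Slider(ax_slider, 'timestep', 0, 11999, valinit=0, valstep=1)
time_slider.on_changed(lambda val: make_plots(int(val)))  # redraw when the slider moves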

Scikit-learn RandomForestClassifier() feature selection, just select the train set?

I'm using scikit-learn for machine learning.
I have 800 samples with 2048 features, so I want to reduce the number of features in the hope of getting better accuracy.
It is a multiclass problem (classes 0-5), and the features consist of 1's and 0's: [1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,0....,0]
I'm using the ensemble method RandomForestClassifier().
Should I perform feature selection on the training data only?
Is it enough if I'm using this code:
from sklearn import metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older versions

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

clf = RandomForestClassifier(n_estimators=200,
                             warm_start=True,
                             criterion='gini',
                             max_depth=13)
clf.fit(X_train, y_train).transform(X_train)

predicted = clf.predict(X_test)
expected = y_test
confusionMatrix = metrics.confusion_matrix(expected, predicted)
Because the accuracy didn't get any higher. Is everything OK in the code, or am I doing something wrong?
I'll be very grateful for your help.
I'm not sure I understood your question correctly, so I'll answer what I think I understood =)
First, reducing the dimension of your features (from 2048 to 500, for example) might not give you better results. It all depends on the capacity of your model to capture the geometry of your data. For example, you can get much better results with a linear model if you first reduce dimension through non-linear methods that capture a particular geometry and 'linearize' it, instead of applying the linear model directly to the raw data. That is because your data may be intrinsically non-linear, in which case the linear model cannot capture that geometry in the original space (think of a circle in 2D).
In the code you gave, you did not reduce dimension, though: you split the data into two datasets (the feature dimension stays at 2048; only the number of samples changed). Training on a smaller dataset usually results in worse accuracy (data = information; when you leave some out, you lose information). But splitting the data lets you test for overfitting in particular, which is very important. Once the best parameters are chosen (see cross-validation), you should learn on all the data you have!
Given your 0.7 * 800 = 560 training samples, I think a depth of 13 is pretty large and you might be overfitting. You may want to play with this parameter first if you want to improve your accuracy!
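For example, a quick way to probe that parameter with cross-validation (a sketch using scikit-learn's GridSearchCV; the grid values are illustrative):

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# search over a few depths while keeping the number of trees fixed
param_grid = {'max_depth': [3, 5, 7, 9, 11, 13], 'n_estimators': [200]}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)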
1) Often reducing the feature space does not help with accuracy, and using a regularized classifier leads to better results.
2) To do feature selection, you need two methods: one to reduce the set of features, another that does the actual supervised task (classification here).
Have you tried just using the standard classifiers? Clearly you tried the RF, but I'd also try a linear method like LinearSVC/LogisticRegression or a kernel SVC.
If you want to do feature selection, what you need to do is something like this:
feature_selector = LinearSVC(penalty='l1', dual=False)  # or maybe start with SelectKBest()
feature_selector.fit(X_train, y_train)
# in recent scikit-learn versions, wrap the fitted estimator in SelectFromModel
# to get the transform step
X_train_reduced = feature_selector.transform(X_train)
X_test_reduced = feature_selector.transform(X_test)

classifier = RandomForestClassifier().fit(X_train_reduced, y_train)
prediction = classifier.predict(X_test_reduced)
Or you use a pipeline, as here: http://scikit-learn.org/dev/auto_examples/feature_selection/feature_selection_pipeline.html
Maybe we should add a version without the pipeline to the examples?
[cross-posted from the mailing list where this was originally asked]
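For reference, a rough sketch of the pipeline approach linked above, assuming a recent scikit-learn where SelectFromModel supplies the transform step:

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# feature selection and classification chained together, so the selector is
# fit only on the training data and reused unchanged on the test data
pipe = Pipeline([
    ('select', SelectFromModel(LinearSVC(penalty='l1', dual=False))),
    ('clf', RandomForestClassifier(n_estimators=200)),
])
pipe.fit(X_train, y_train)
predicted = pipe.predict(X_test)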
Dimensionality reduction or feature selection is definitely advisable if you have more features than samples. You could look into Principal Component Analysis and other modules in sklearn.decomposition to reduce the number of features. There is also a useful section on Feature Selection in the scikit-learn documentation.
After fitting sklearn.decomposition.PCA, you can inspect the explained_variance_ratio_ to determine an advisable number of features (n_components) to reduce to (the point of PCA here is to find a reduced number of features that captures most of the variance in your original feature space). Some people like to retain enough components that the cumulative explained_variance_ratio_ exceeds 0.9 or 0.95; others prefer to drop components beyond the point where the explained_variance_ratio_ drops off sharply. Then refit the PCA with the n_components you chose, transform your X_train and X_test, and fit your classifier as above.
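A minimal sketch of that procedure (the 0.95 variance threshold is just an example):

import numpy as np
from sklearn.decomposition import PCA

# fit a full PCA once, then pick the smallest number of components whose
# cumulative explained variance ratio exceeds the chosen threshold
pca_full = PCA().fit(X_train)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
n_components = int(np.searchsorted(cumulative, 0.95)) + 1

pca = PCA(n_components=n_components).fit(X_train)
X_train_reduced = pca.transform(X_train)
X_test_reduced = pca.transform(X_test)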

Perlin's Noise with OpenGL

I was studying Perlin noise through some examples at http://dindinx.net/OpenGL/index.php?menu=exemples&submenu=shaders and couldn't help noticing that his make3DNoiseTexture() in perlin.c uses noise3(ni) instead of PerlinNoise3D(...).
Now why is that? Isn't Perlin's Noise supposed to be a summation of different noise frequencies and amplitudes?
Question 2: what do ni, inci, incj, inck stand for? Why use ni instead of x, y coordinates? And why is ni incremented with
ni[0]+=inci;
inci = 1.0 / (Noise3DTexSize / frequency);
I see Hugo Elias created his Perlin2D with x,y coordinates, and so does PerlinNoise3D(...).
Thanks in advance :)
I now understand why and am going to answer my own question in hopes that it helps other people.
Perlin noise is actually a synthesis of gradient noises. To produce it, we compute the dot product between a vector pointing from one of the corners flooring the input point to the input point itself, and the randomly generated gradient vector at that corner.
Now, if the input point were a whole number, such as the x, y, z coordinates of a texture you want to create, the dot product would always return 0, which would give you flat noise. So instead we use inci, incj, inck as an alternative index. Yep, just an index, nothing else.
Now, returning to question 1, there are two methods to implement Perlin noise:
1. Calculate the noise values for each octave separately and store them in the RGBA channels of the texture.
2. Sum the octaves beforehand and store the result in one of the RGBA channels of the texture.
noise3(ni) is the actual implementation of method 1, while PerlinNoise3D(...) corresponds to the latter.
In my personal opinion, method 1 is much better because you have much more flexibility over how you use each octave in your shaders.
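To make the distinction concrete, here is a rough Python illustration (not the perlin.c source); noise3 stands in for any single-octave 3D noise function:

def fractal_noise(noise3, x, y, z, octaves=4, persistence=0.5):
    # method 2: sum the octaves up front, the way a PerlinNoise3D-style helper would
    total, frequency, amplitude = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * noise3(x * frequency, y * frequency, z * frequency)
        frequency *= 2.0
        amplitude *= persistence
    return total

# method 1 instead evaluates each octave separately and stores them in separate
# RGBA channels of the 3D texture, leaving the combination to the shader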
My guess on the reason for using noise3(ni) in make3DNoiseTexture() instead of PerlinNoise3D(...) is that when you use that noise texture in your shader, you want to be able to replicate and modify the functionality of PerlinNoise3D(...) directly in the shader.
My guess for the reasoning behind ni, inci, incj, inck is that using the x, y, z of the volume directly doesn't give a good result, so by scaling the noise with the frequency instead, it is possible to adjust the resolution of the noise independently of the volume size.