I have two sets of samples: one is distributed exponentially and the other follows a Bernoulli distribution (I used scipy.stats.expon and scipy.stats.bernoulli to fit my data).
Based on these samples, I want to create two random generators that will let me draw numbers from the two distributions.
What alternatives are there for doing so?
How can I find the correct parameters for creating the random generators?
Use the rvs method to generate a sample using the estimated parameters. For example, suppose x holds your initial data.
In [56]: x
Out[56]:
array([ 0.366, 0.235, 0.286, 0.84 , 0.073, 0.108, 0.156, 0.029,
0.11 , 0.122, 0.227, 0.148, 0.095, 0.233, 0.317, 0.027])
Use scipy.stats.expon to fit the exponential distribution to this data. I assume we are interested in the usual case where the location parameter is 0, so I use floc=0 in the fit call.
In [57]: from scipy.stats import expon
In [58]: loc, scale = expon.fit(x, floc=0)
In [59]: scale
Out[59]: 0.21076203455218898
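(This is the maximum likelihood estimate: with the location fixed at 0, the fitted scale of an exponential distribution is simply the sample mean, x.mean().)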
Now use those parameters to generate a random sample.
In [60]: sample = expon.rvs(loc=0, scale=scale, size=8)
In [61]: sample
Out[61]:
array([ 0.21576877, 0.23415911, 0.6547364 , 0.44424148, 0.07870868,
0.10415167, 0.12905163, 0.23428833])
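The Bernoulli part works similarly, with one caveat: scipy's discrete distributions do not have a fit method, so estimate the parameter p yourself; its maximum likelihood estimate is just the sample mean. A minimal sketch, assuming b holds your 0/1 data:

import numpy as np
from scipy.stats import bernoulli

b = np.array([1, 0, 0, 1, 1, 0, 1, 1])  # hypothetical 0/1 data
p = b.mean()                            # MLE of the Bernoulli parameter
sample = bernoulli.rvs(p, size=8)       # draw new samples from the fitted distribution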
Using sklearn, I want to have 3 splits (i.e. n_splits = 3) in the sample dataset and a train/test ratio of 70:30. I'm able to split the set into 3 folds but not able to define the test size (similar to the train_test_split method). Is there a way to define the test sample size in StratifiedKFold?
from sklearn.model_selection import StratifiedKFold as SKF
skf = SKF(n_splits=3)
skf.get_n_splits(X, y)
for train_index, test_index in skf.split(X, y):
    # Loops over 3 iterations to produce a stratified train/test split
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
StratifiedKFold does, by definition, a K-fold split: the returned iterator yields K-1 groups for training and 1 group for testing on each iteration. K is controlled by n_splits, so it creates groups of n_samples/K and uses every combination of K-1 groups for training versus 1 group for testing. Refer to Wikipedia or google "K-fold cross-validation" for more info about it.
In short, the size of the test set will be 1/K (i.e. 1/n_splits), so you can tune that parameter to control the test size (e.g. n_splits=3 gives a test split of size 1/3 = 33% of your data). However, StratifiedKFold will iterate over all K train/test combinations, which might not be what you want.
Having said that, you might be interested in StratifiedShuffleSplit, which returns a configurable number of splits with a configurable train/test ratio. If you just want a single split, set n_splits=1 and keep test_size=0.3 (or whatever ratio you want).
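A minimal sketch of that approach (assuming X and y are the arrays from the question):

from sklearn.model_selection import StratifiedShuffleSplit

sss = StratifiedShuffleSplit(n_splits=3, test_size=0.3, random_state=0)
for train_index, test_index in sss.split(X, y):
    # Each iteration yields an independent, stratified 70:30 split.
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]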
Dear pvlib users and developers,
I'm a researcher in computer science, without particular expertise in the simulation or modelling of solar panels. I'm interested in using pvlib, since we are trying to simulate the behaviour of a small solar panel used for IoT applications. In particular, the panel specs are the following:
12.8% max efficiency, Vmp = 5.82V, size = 225 × 155 × 17 mm.
Before using pvlib, one of my collaborators wrote code that computes the irradiation directly from average monthly values calculated with PVWatts. I was not really satisfied, so we are starting to use pvlib.
In the old code, we have the power and current of the panel calculated as:
W = Irradiation * PanelSize(m^2) * Efficiency
A = W / Vmp
The irradiation in Madrid has been obtained with PVWatts, and this is what my collaborator used:
DIrradiance = (2030.0,2960.0,4290.0,5110.0,5950.0,7090.0,7200.0,6340.0,4870.0,3130.0,2130.0,1700.0)
I'm trying to understand whether pvlib computes values similar to the ones above, as averages over a day for each month, and also the curve of production within a day.
I wrote this to compare pvlib with our old model:
import math
import numpy as np
import datetime as dt
import matplotlib.pyplot as plt
import pandas as pd
import pvlib
from pvlib.location import Location
def irradiance(day, m):
    DIrradiance = (2030.0, 2960.0, 4290.0, 5110.0, 5950.0,
                   7090.0, 7200.0, 6340.0, 4870.0, 3130.0, 2130.0, 1700.0)
    madrid = Location(40.42, -3.70, 'Europe/Madrid', 600, 'Madrid')
    times = pd.date_range(start=dt.datetime(2015, m, day, 0, 0),
                          end=dt.datetime(2015, m, day, 23, 59),
                          freq='60min')
    spaout = pvlib.solarposition.spa_python(times, madrid.latitude,
                                            madrid.longitude)
    spaout = spaout.assign(cosz=pd.Series(np.cos(np.deg2rad(spaout['zenith']))))
    z = np.array(spaout['cosz'])
    return z.clip(0) * DIrradiance[m - 1]
madrid = Location(40.42, -3.70, 'Europe/Madrid', 600, 'Madrid')
times = pd.date_range(start=dt.datetime(2015, 8, 15, 0, 0),
                      end=dt.datetime(2015, 8, 15, 23, 59),
                      freq='60min')
old = irradiance(15,8) # old model
new = madrid.get_clearsky(times) # pvlib irradiance
plt.plot(old,'r-') # compare them.
plt.plot(old/6.0,'y-') # old seems 6 times more..I do not know why
plt.plot(new['ghi'].values,'b-')
plt.show()
The code above computes the old irradiance using the zenith angle, and computes the GHI values using the clear-sky model. I do not understand whether the values in ghi must also be multiplied by the cosine of the zenith, or not. In any case, they are smaller by a factor of 6. What I'd like to have in the end is the power and current output of the panel (DC), without any inverter; we are not really interested in modelling it exactly, but we would at least like to get a reasonable curve. We are able to capture the amperes produced by the panel, and we want to compare the values measured with the panel on the rooftop against the values calculated by pvlib.
Any help on this would be really appreciated. Thanks
Sorry Will, I do not care much about my previous model, since I'd like to move all the code to pvlib. I followed your suggestion and I'm using irradiance.total_irrad; the code now looks like this:
from pvlib import atmosphere, irradiance

madrid = Location(40.42, -3.70, 'Europe/Madrid', 600, 'Madrid')
times = pd.date_range(start=dt.datetime(2015, 1, 1, 0, 0),
                      end=dt.datetime(2015, 1, 1, 23, 59),
                      freq='60min')
ephem_data = pvlib.solarposition.spa_python(times, madrid.latitude,
                                            madrid.longitude)
irrad_data = madrid.get_clearsky(times)
AM = atmosphere.relativeairmass(ephem_data['apparent_zenith'])
total = irradiance.total_irrad(40, 180,
                               ephem_data['apparent_zenith'], ephem_data['azimuth'],
                               dni=irrad_data['dni'], ghi=irrad_data['ghi'],
                               dhi=irrad_data['dhi'], airmass=AM,
                               surface_type='urban')
poa = total['poa_global'].values
Now that I know the irradiance on the POA, I want to compute the output in amperes. Is it just
(poa * PANEL_EFFICIENCY * AREA) / VOLT_OUTPUT ?
It's not clear to me how you arrived at your values for DIrradiance or what the units are, so I can't comment much on the discrepancies between the values. I'm guessing that it's some kind of monthly data, since there are 12 values. If so, you'd need to calculate ~hourly pvlib irradiance data and then integrate it to check for consistency.
If your module will be tilted, you'll need to convert your ~hourly irradiance GHI, DNI, DHI values to plane of array (POA) irradiance using a transposition model. The irradiance.total_irrad function is the easiest way to do that.
The next steps depend on the IV characteristics of your module, the rest of the circuit, and how accurate you need the model to be.
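If the simple efficiency model from your own formula turns out to be accurate enough, a minimal sketch of that last step (poa comes from your snippet above; the constants are taken from the panel spec in the question, and the names are just for illustration):

PANEL_EFFICIENCY = 0.128  # 12.8% max efficiency, from the spec
AREA = 0.225 * 0.155      # panel area in m^2 (225 mm x 155 mm)
VMP = 5.82                # voltage at the maximum power point, in volts

power_w = poa * PANEL_EFFICIENCY * AREA  # DC power in watts
current_a = power_w / VMP                # DC current in amperes

Keep in mind this ignores temperature effects and the shape of the module's IV curve, so treat it as a first approximation.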
I'm trying to solve a single first-order ODE using odeint. The code follows. I expect to get 4 values of y for 4 time points. The issue I'm struggling with is the ability to pass the nth value of mt and nt to calculate dydt. I think odeint passes all 4 values of mt and nt at once, instead of just the 0th, 1st, 2nd, or 3rd, depending on the iteration. Because of this, I get this error:
RuntimeError: The size of the array returned by func (4) does not match the size of y0 (1).
Interestingly, if I replace the initial condition, which is (and should be) a single value, with a0 = [2]*4, the code works, but gives me a 4x4 matrix as the solution, which seems incorrect.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

mt = np.array([3, 7, 4, 2])     # Array of constants
nt = np.array([5, 1, 9, 3])     # Array of constants
c1, c2, c3 = [-0.3, 1.4, -0.5]  # Coefficients
para = [mt, nt]                 # Packing parameters

# Test ODE function
def test(y, t, extra):
    m, n = extra
    dydt = c1*c2*m - c1*y - c3*n
    return dydt

a0 = [2]                # Initial condition
tspan = range(len(mt))  # Define tspan

# Solving the ODE
yt = odeint(test, a0, tspan, args=(para,))

# Plotting the ODE
plt.plot(tspan, yt, 'g')
plt.title('Multiple Parameters Test')
plt.xlabel('Time')
plt.ylabel('Magnitude')
plt.show()
The first order differential equation is:
dy/dt = c1*(c2*mt-y(t)) - c3*nt
This equation represents a part of the murine endocrine system, which I am trying to model. The system is analogous to a two-tank system, where the first tank receives a specific hormone [at an unknown rate] but our sensor will detect that level (mt) at specific time intervals (1 second). This tank then feeds into the second tank, where the level of this hormone (y) is detected by another sensor. I labeled the levels using separate variables because the sensors that detect the levels are independent of each other and are not calibrated to each other. c2 may be considered the coefficient expressing the correlation between the two levels. Also, the transfer of this hormone from tank 1 to tank 2 is diffusion-driven. The hormone is further consumed by a biochemical process (similar to a drain valve for the second tank). At the moment it is unclear which parameters affect the consumption; however, another sensor can detect the amount of hormone (nt) being consumed at a specific time interval (1 second, in this case too).
Thus, mt and nt are the concentrations/levels of the hormone at specific time points. Although only four elements long in the code, these arrays are much longer in my study. All sensors report the concentrations at 1-second intervals; hence tspan consists of time points separated by 1 second.
The objective is to determine the concentration of this hormone in the second tank (y) mathematically, and then to optimize the values of these coefficients based on the experimental data. I was able to pass these arrays mt and nt to the defined ODE and solve it using ode45 in MATLAB with no issue. I've been running into this RuntimeError while trying to replicate the code in Python.
As I mentioned in a comment, if you want to model this system using an ordinary differential equation, you have to make an assumption about the values of m and n between sample times. One possible model is to use linear interpolation. Here's a script that uses scipy.interpolate.interp1d to create the functions mfunc(t) and nfunc(t) based on the samples mt and nt.
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
mt = np.array([3, 7, 4, 2])     # Array of constants
nt = np.array([5, 1, 9, 3])     # Array of constants
c1, c2, c3 = [-0.3, 1.4, -0.5]  # Coefficients
# Create linear interpolators for m(t) and n(t).
sample_times = np.arange(len(mt))
mfunc = interp1d(sample_times, mt, bounds_error=False, fill_value="extrapolate")
nfunc = interp1d(sample_times, nt, bounds_error=False, fill_value="extrapolate")
# Test ODE function
def test(y, t):
    dydt = c1*c2*mfunc(t) - c1*y - c3*nfunc(t)
    return dydt
a0 = [2] # Initial Condition
tspan = np.linspace(0, sample_times.max(), 8*len(sample_times)+1)
#tspan = sample_times
# Solving the ODE
yt = odeint(test, a0, tspan)
# Plotting the ODE
plt.plot(tspan, yt, 'g')
plt.title('Multiple Parameters Test')
plt.xlabel('Time')
plt.ylabel('Magnitude')
plt.show()
Here is the plot created by the script:
Note that instead of generating the solution only at sample_times (i.e. at times 0, 1, 2, and 3), I set tspan to a denser set of points. This shows the behavior of the model between sample times.
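Since your stated objective is to then optimize c1, c2, and c3 against experimental data, one possible follow-up (a sketch only; y_measured is a hypothetical array of measured concentrations at sample_times) is to wrap the solver in a function and hand it to scipy.optimize.curve_fit:

from scipy.optimize import curve_fit

def model(t, c1, c2, c3):
    # Solve the ODE with a candidate coefficient set; return y at times t.
    def rhs(y, t):
        return c1*c2*mfunc(t) - c1*y - c3*nfunc(t)
    return odeint(rhs, a0, t).ravel()

# popt, pcov = curve_fit(model, sample_times, y_measured, p0=[-0.3, 1.4, -0.5])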
I have constructed a regression-type neural net (NN) with dropout in TensorFlow. I would like to know if it is possible to find out, in the output file, which hidden units were dropped from the previous layer, so that we could reimplement the NN results in C++ or Matlab.
The following is an example of a TensorFlow model. There are three hidden layers and one output layer. After the 3rd sigmoid layer, there is a dropout with keep probability equal to 0.9. I would like to know if it is possible to find out which hidden units in the 3rd sigmoid layer are dropped.
def multilayer_perceptron(_x, _weights, _biases):
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(_x, _weights['h1']), _biases['b1']))
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, _weights['h2']), _biases['b2']))
    layer_3 = tf.nn.sigmoid(tf.add(tf.matmul(layer_2, _weights['h3']), _biases['b3']))
    layer_d = tf.nn.dropout(layer_3, 0.9)  # keep_prob = 0.9
    return tf.matmul(layer_d, _weights['out']) + _biases['out']
Thank you very much!
There is a way to get the 0/1 mask produced by tf.nn.dropout(), with shape layer_3.get_shape().
The trick is to give a name to your dropout operation:
layer_d = tf.nn.dropout(layer_3, 0.9, name='my_dropout')
Then you can get the wanted mask through the TensorFlow graph:
graph = tf.get_default_graph()
mask = graph.get_tensor_by_name('my_dropout/Floor:0')
The tensor mask will have the same shape and type as layer_d, and will only contain the values 0 and 1; 0 corresponds to the dropped neurons.
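To actually extract the mask values, fetch them in the same sess.run call as the forward pass, so both correspond to the same random draw (a sketch; output is the tensor returned by multilayer_perceptron, and _x / x_batch are the hypothetical input placeholder and batch):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetching the output and the mask together guarantees they come
    # from the same random dropout draw.
    output_value, mask_value = sess.run([output, mask], feed_dict={_x: x_batch})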
Simple and idiomatic solution (although possibly slightly slower than Oliver's):
# Generate the mask by sending a tensor of ones through the same dropout op.
# Note: tf.nn.dropout scales the kept entries by 1/keep_prob, so the mask
# holds values 0 or 1/keep_prob rather than 0 or 1.
mask = tf.nn.dropout(tf.ones_like(layer), keep_prob)
# Apply the mask; multiplying by 0 or 1/keep_prob reproduces dropout exactly.
dropped_layer = layer * mask
As my title suggests, I'm trying to fit a Gaussian to some data and I'm just getting a straight line. I've been looking at these other discussions, Gaussian fit for Python and Fitting a gaussian to a curve in Python, which seem to suggest basically the same thing. I can make the code in those discussions work fine for the data they provide, but it won't work for my data.
My code looks like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# y is the data array given below
y = y - y[0]  # to make it go to zero on both sides
x = np.arange(len(y))
max_y = max(y)
n = len(y)
mean = sum(x*y)/n
sigma = np.sqrt(sum(y*(x-mean)**2)/n)
# Someone on a previous post seemed to think this needed the sqrt.
# Tried it without as well; it made no difference.

def gaus(x, a, x0, sigma):
    return a*np.exp(-(x-x0)**2/(2*sigma**2))

popt, pcov = curve_fit(gaus, x, y, p0=[max_y, mean, sigma])
# It was suggested in one of the other posts I looked at to make the
# first element of p0 be the maximum value of y.
# I also tried it as 1, but that did not work either
plt.plot(x,y,'b:',label='data')
plt.plot(x,gaus(x,*popt),'r:',label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.show()
The data I am trying to fit is as follows:
y = array([ 6.95301373e+12, 9.62971320e+12, 1.32501876e+13,
1.81150568e+13, 2.46111132e+13, 3.32321345e+13,
4.45978682e+13, 5.94819771e+13, 7.88394616e+13,
1.03837779e+14, 1.35888594e+14, 1.76677210e+14,
2.28196006e+14, 2.92781632e+14, 3.73133045e+14,
4.72340762e+14, 5.93892782e+14, 7.41632194e+14,
9.19750269e+14, 1.13278296e+15, 1.38551838e+15,
1.68291212e+15, 2.02996957e+15, 2.43161742e+15,
2.89259207e+15, 3.41725793e+15, 4.00937676e+15,
4.67187762e+15, 5.40667931e+15, 6.21440313e+15,
7.09421973e+15, 8.04366842e+15, 9.05855930e+15,
1.01328502e+16, 1.12585509e+16, 1.24257598e+16,
1.36226443e+16, 1.48356404e+16, 1.60496345e+16,
1.72482199e+16, 1.84140400e+16, 1.95291969e+16,
2.05757166e+16, 2.15360187e+16, 2.23933053e+16,
2.31320228e+16, 2.37385276e+16, 2.42009864e+16,
2.45114362e+16, 2.46427484e+16, 2.45114362e+16,
2.42009864e+16, 2.37385276e+16, 2.31320228e+16,
2.23933053e+16, 2.15360187e+16, 2.05757166e+16,
1.95291969e+16, 1.84140400e+16, 1.72482199e+16,
1.60496345e+16, 1.48356404e+16, 1.36226443e+16,
1.24257598e+16, 1.12585509e+16, 1.01328502e+16,
9.05855930e+15, 8.04366842e+15, 7.09421973e+15,
6.21440313e+15, 5.40667931e+15, 4.67187762e+15,
4.00937676e+15, 3.41725793e+15, 2.89259207e+15,
2.43161742e+15, 2.02996957e+15, 1.68291212e+15,
1.38551838e+15, 1.13278296e+15, 9.19750269e+14,
7.41632194e+14, 5.93892782e+14, 4.72340762e+14,
3.73133045e+14, 2.92781632e+14, 2.28196006e+14,
1.76677210e+14, 1.35888594e+14, 1.03837779e+14,
7.88394616e+13, 5.94819771e+13, 4.45978682e+13,
3.32321345e+13, 2.46111132e+13, 1.81150568e+13,
1.32501876e+13, 9.62971320e+12, 6.95301373e+12,
4.98705540e+12])
I would show you what it looks like, but apparently I don't have enough reputation points...
Anyone got any idea why it's not fitting properly?
Thanks for your help :)
The importance of the initial guess, p0, in curve_fit's argument list cannot be stressed enough.
Notice that the docstring mentions that
[p0] If None, then the initial values will all be 1
So if you do not supply it, it will use an initial guess of 1 for all parameters you're trying to optimize for.
The choice of p0 also affects the speed at which the underlying algorithm updates the guess vector (see the documentation of least_squares).
When you look at the data you have, you'll notice that the maximum and the mean, mu_0, of the Gaussian-like dataset y are about 2.4e16 and 49, respectively. With the peak value so large, the algorithm would need to make drastic changes to its initial guess to reach that value.
When you supply a good initial guess to the curve fitting algorithm, convergence is more likely to occur.
Using your data, you can supply a good initial guess for the peak_value, the mean and sigma, by writing them like this:
y = np.array([...]) # starting from the original dataset
x = np.arange(len(y))
peak_value = y.max()
mean = x[y.argmax()] # observation of the data shows that the peak is close to the center of the interval of the x-data
sigma = mean - np.where(y > peak_value * np.exp(-.5))[0][0] # when x is sigma in the gaussian model, the function evaluates to a*exp(-.5)
popt, pcov = curve_fit(gaus, x, y, p0=[peak_value, mean, sigma])
print(popt) # prints: [ 2.44402560e+16 4.90000000e+01 1.20588976e+01]
Note that in your code you take sum(x*y)/n for the mean, which is strange: it multiplies the Gaussian by a monotonically increasing line of constant slope (a polynomial of degree 1) before averaging, and it divides by n rather than by sum(y), as a weighted mean would. That offsets the estimated mean of y (in this case to the right). A similar remark can be made for your calculation of sigma.
Final remark: the histogram of y will not resemble a Gaussian, because y already is a Gaussian-shaped curve; the histogram merely bins (counts) values into categories (answering the question "how many data points in y have a value between a and b?").
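For instance, with numpy:

import numpy as np
counts, edges = np.histogram(y, bins=10)  # counts[i] = how many y-values fall in [edges[i], edges[i+1])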