OpenCV blur() vs. fspecial('average') - C++

My filterSize = 2:
h = fspecial('average', filterSize);
imageData = imfilter(imageData, h, 'replicate');
meaning my kernel is [1/4 1/4; 1/4 1/4].
I'm using
cv::blur(dst, dst, cv::Size(filterSize, filterSize));
so I should get the same results.
However, the OpenCV result has a duplicated line, probably due to circular or some other sort of border padding of the matrix.
Line 0 and line 1 are the same.
Can you suggest how to get a result without that duplicated line?
OpenCV results:
0.027027026, 0.027027026, 0.022727273, 0.018427517, 0.017199017, 0.017813267,
0.027027026, 0.027027026, 0.022727273, 0.018427517, 0.017199017, 0.017813267,
0.02948403, 0.02948403, 0.027641278, 0.028255528, 0.03194103, 0.03194103,
0.054054055, 0.054054055, 0.055896807, 0.064496316, 0.077395573, 0.079852581,
0.11240786, 0.11240786, 0.11855037, 0.13513513, 0.14864865, 0.1566339,
0.16646191, 0.16646191, 0.17383292, 0.18611793, 0.18673219, 0.19471744,
0.18243243, 0.18243243, 0.19471744, 0.19594595, 0.18796068, 0.1928747,
MATLAB results:
0.0270270 0.022727 0.018427 0.017199 0.017813 0.0251842
0.029484 0.027641 0.028255 0.031941 0.031941 0.0350122
0.0540540 0.055896 0.064496 0.077395 0.079852 0.0847665
0.1124078 0.118550 0.135135 0.148648 0.156633 0.1683046
0.1664619 0.173832 0.186117 0.186732 0.194717 0.2057739
0.1824324 0.194717 0.195945 0.187960 0.192874 0.1947174
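The duplicate is consistent with cv::blur's defaults: borderType defaults to BORDER_DEFAULT (reflected padding), and for an even-sized kernel the default anchor sits one pixel away from where imfilter places it, so output rows 0 and 1 end up averaging the same pair of input rows. A minimal sketch of a call that should match imfilter(..., 'replicate'), shown with the Python bindings (the C++ overload takes the same anchor and borderType arguments; the file name is hypothetical):
import cv2

img = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype("float32")
filterSize = 2
# anchor=(0, 0) makes output(i, j) the mean of input(i..i+1, j..j+1), which is
# where imfilter centers an even-sized kernel; BORDER_REPLICATE matches 'replicate'.
blurred = cv2.blur(img, (filterSize, filterSize), anchor=(0, 0),
                   borderType=cv2.BORDER_REPLICATE)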

How to convert a GrADS ctl GRIB file to NetCDF?

I am having problems converting a GrADS ctl GRIB data file to NetCDF format. From the Data Integration and Analysis System, I downloaded 6-hourly JRA-55 reanalysis data for an entire month. It downloads in the form of a DAT file, and there is a supporting GrADS ctl file that says the data are in GRIB format.
The contents of the anl_surf.ctl file are as follows:
dset ^anl_surf.%y4%m2%d2%h2
index ^anl_surf.idx
undef 9.999E+20
title anl_surf
* produced by grib2ctl v0.9.12.5p41
dtype grib 255
options template
ydef 320 levels
-89.570 -89.013 -88.453 -87.892 -87.331 -86.769 -86.208 -85.647 -85.085 -84.523
-83.962 -83.400 -82.839 -82.277 -81.716 -81.154 -80.592 -80.031 -79.469 -78.908
-78.346 -77.784 -77.223 -76.661 -76.100 -75.538 -74.976 -74.415 -73.853 -73.291
-72.730 -72.168 -71.607 -71.045 -70.483 -69.922 -69.360 -68.799 -68.237 -67.675
-67.114 -66.552 -65.990 -65.429 -64.867 -64.306 -63.744 -63.182 -62.621 -62.059
-61.498 -60.936 -60.374 -59.813 -59.251 -58.689 -58.128 -57.566 -57.005 -56.443
-55.881 -55.320 -54.758 -54.196 -53.635 -53.073 -52.512 -51.950 -51.388 -50.827
-50.265 -49.704 -49.142 -48.580 -48.019 -47.457 -46.895 -46.334 -45.772 -45.211
-44.649 -44.087 -43.526 -42.964 -42.402 -41.841 -41.279 -40.718 -40.156 -39.594
-39.033 -38.471 -37.909 -37.348 -36.786 -36.225 -35.663 -35.101 -34.540 -33.978
-33.416 -32.855 -32.293 -31.732 -31.170 -30.608 -30.047 -29.485 -28.924 -28.362
-27.800 -27.239 -26.677 -26.115 -25.554 -24.992 -24.431 -23.869 -23.307 -22.746
-22.184 -21.622 -21.061 -20.499 -19.938 -19.376 -18.814 -18.253 -17.691 -17.129
-16.568 -16.006 -15.445 -14.883 -14.321 -13.760 -13.198 -12.636 -12.075 -11.513
-10.952 -10.390 -9.828 -9.267 -8.705 -8.144 -7.582 -7.020 -6.459 -5.897
-5.335 -4.774 -4.212 -3.651 -3.089 -2.527 -1.966 -1.404 -0.842 -0.281
0.281 0.842 1.404 1.966 2.527 3.089 3.651 4.212 4.774 5.335
5.897 6.459 7.020 7.582 8.144 8.705 9.267 9.828 10.390 10.952
11.513 12.075 12.636 13.198 13.760 14.321 14.883 15.445 16.006 16.568
17.129 17.691 18.253 18.814 19.376 19.938 20.499 21.061 21.622 22.184
22.746 23.307 23.869 24.431 24.992 25.554 26.115 26.677 27.239 27.800
28.362 28.924 29.485 30.047 30.608 31.170 31.732 32.293 32.855 33.416
33.978 34.540 35.101 35.663 36.225 36.786 37.348 37.909 38.471 39.033
39.594 40.156 40.718 41.279 41.841 42.402 42.964 43.526 44.087 44.649
45.211 45.772 46.334 46.895 47.457 48.019 48.580 49.142 49.704 50.265
50.827 51.388 51.950 52.512 53.073 53.635 54.196 54.758 55.320 55.881
56.443 57.005 57.566 58.128 58.689 59.251 59.813 60.374 60.936 61.498
62.059 62.621 63.182 63.744 64.306 64.867 65.429 65.990 66.552 67.114
67.675 68.237 68.799 69.360 69.922 70.483 71.045 71.607 72.168 72.730
73.291 73.853 74.415 74.976 75.538 76.100 76.661 77.223 77.784 78.346
78.908 79.469 80.031 80.592 81.154 81.716 82.277 82.839 83.400 83.962
84.523 85.085 85.647 86.208 86.769 87.331 87.892 88.453 89.013 89.570
xdef 640 linear 0.000000 0.562500
pdef 157792 1 file 1 stream binary-big ^TL319.pdef
tdef 120 linear 00Z01Apr1958 6hr
zdef 1 linear 1 1
vars 7
POTsfc 0 13,1,0 ** surface Potential temperature [K]
PRESsfc 0 1,1,0 ** surface Pressure [Pa]
RH2m 0 52,105,2 ** 2 m above ground Relative humidity [%]
SPFH2m 0 51,105,2 ** 2 m above ground Specific humidity [kg/kg]
TMP2m 0 11,105,2 ** 2 m above ground Temperature [K]
UGRD10m 0 33,105,10 ** 10 m above ground u-component of wind [m/s]
VGRD10m 0 34,105,10 ** 10 m above ground v-component of wind [m/s]
ENDVARS
Following someone's answer to a similar question, I used the command:
cdo -f nc import_binary anl_surf.ctl anl_surf.nc
But I receive the following error:
Open Error: Unknown keyword in description file
--> The invalid description file record is:
--> index ^anl_surf.idx
The data file was not opened.
cdo import_binary (Abort): Open failed!
I found out that this error results from the fact that the index keyword is not supported by CDO, and that the import_binary operator does not support the GRIB format.
Does anyone know of an operator that supports converting a GrADS ctl file with data in GRIB format to NetCDF? Unfortunately, I cannot download this data directly in GRIB format, only in DAT format. Any help is appreciated, thank you!
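One avenue worth trying: since the ctl file merely describes the binary files, and those files are raw GRIB records, the ctl can be bypassed altogether. CDO reads GRIB directly (cdo -f nc copy on one of the anl_surf.* files), and so does Python's xarray with the cfgrib engine. A minimal sketch, assuming cfgrib (and its eccodes dependency) can decode these JRA-55 GRIB1 records; the glob pattern follows the dset template above:
import glob
import xarray as xr  # requires the cfgrib engine (and eccodes) to be installed

# dset ^anl_surf.%y4%m2%d2%h2 expands to file names like anl_surf.1958040100
for path in sorted(glob.glob("anl_surf.??????????")):
    ds = xr.open_dataset(path, engine="cfgrib")  # decode the raw GRIB records
    ds.to_netcdf(path + ".nc")                   # one NetCDF file per time step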

(OpenCV / DNN) Face Recognition isn't working - Euclidean distance is always 0

I want to verify whether one person is similar to another person, so I want to compute the similarity between two faces.
The input faces are two images, one of Mindy and one of Madonna (not reproduced here).
I want to push them through the DNN and then compute the Euclidean distance between the two resulting matrices.
I've used the following OpenFace model: https://storage.cmusatyalab.org/openface-models/nn4.small2.v1.t7
This is my code for calculating the distances:
cv::Mat madonna = cv::imread("/home/der/Madonna_Face.jpg");
cv::Mat mindy = cv::imread("/home/der/Mindy_Face.jpg");
cv::resize(madonna, madonna, cv::Size(96, 96));
cv::resize(mindy, mindy, cv::Size(96, 96));
cv::Mat madonnaBlob = cv::dnn::blobFromImage(madonna, 1.0 / 255, cv::Size(96, 96), cv::Scalar{0,0,0}, true, false);
cv::Mat mindyBlob = cv::dnn::blobFromImage(mindy, 1.0 / 255, cv::Size{96, 96}, cv::Scalar{0,0,0}, true, false);
cv::dnn::Net _net = cv::dnn::readNetFromTorch("/home/der/nn4.small2.v1.t7");
_net.setInput(madonnaBlob);
cv::Mat madonnaMat = _net.forward();
_net.setInput(mindyBlob);
cv::Mat mindyMat = _net.forward();
std::cout << cv::norm(madonnaMat, mindyMat);
When I do this, the result of cv::norm is 0.
The representations are exactly the same:
std::vector<float> master = madonnaMat; // the net output is CV_32F, so use float
std::vector<float> slave = mindyMat;
for (int i = 0; i < 128; i++) {
    std::cout << master[i] << " # " << slave[i] << std::endl;
}
Output:
> -0.0865457 # -0.0865457
> 0.133816 # 0.133816
> -0.105774 # -0.105774
> 0.05389 # 0.05389
> -0.00391233 # -0.00391233
> ...
Results:
Madonna Representation: [-0.060358506, 0.14156586, -0.10181303, 0.060315549, 0.0016125928, 0.066964693, -0.044892643, -0.043857966, 0.088271223, 0.047121659, 0.078663647, 0.025775915, 0.062051967, 0.034234334, -0.049976062, 0.045926169, 0.084343202, 0.046965379, -0.092582494, 0.13601208, -0.003582818, -0.15382886, 0.075037867, 0.19894752, -0.041007876, -0.12050319, -0.056161541, -0.018724455, 0.024790274, 0.0092850979, 0.095108159, 0.067354925, 0.06044127, 0.041365273, -0.12024247, 0.18279234, 0.027767293, 0.09874554, -0.16951905, 0.062370241, -0.014530737, 0.015518869, -0.0056175897, -0.066358574, -0.02390888, -0.07608442, 0.13011196, 0.031423025, -0.010443882, 0.12755248, -0.010195011, 0.0051672528, -0.10453289, -0.013270194, 0.096139617, 0.10375636, -0.047089621, 0.050923191, 0.066422582, -0.046726897, -0.1845296, 0.031028474, 0.086226918, -0.27064508, 0.055891197, -0.0053421594, 0.035870265, -0.026942547, -0.17279817, 0.13772435, 0.0071162563, 0.075375959, -0.046405111, 0.12658595, 0.11093359, 0.0030428318, 0.070016958, 0.1725318, -0.056130294, -0.14420295, -0.12438529, 0.056423288, -0.080888703, -0.052004829, -0.06481386, 0.14163122, -0.059617694, -0.026075639, 0.052098148, -0.0055074869, -0.014869845, -0.11943244, 0.068051606, -0.096071519, 0.19727865, -0.016027609, -0.05776047, 0.069935486, -0.020494614, 0.013407955, -0.06065508, -0.056143567, -0.04608072, 0.072748154, -0.035580911, 0.15261506, -0.074352823, -0.081481896, 0.020475708, -0.021581693, -0.16350025, 0.12794609, 0.082243897, 0.015881324, 0.011330541, -0.026391003, 0.086644463, -0.10490314, 0.088207267, 0.17892174, 0.025871141, 0.012454472, 0.010682535, 0.1253885, -0.12909022, 0.082067415, -0.035789803, 0.032903988]
Madonna Size: 1 x 128
Madonna Dims: 2
Mindy Representation: [-0.082645342, 0.14463238, -0.10716592, 0.065654278, 0.0045089996, 0.064019054, -0.047334831, -0.056190431, 0.099919245, 0.048234992, 0.068906084, 0.028518379, 0.057044145, 0.046223734, -0.056203742, 0.033566523, 0.082230642, 0.055683684, -0.080982864, 0.12431844, -0.00075431512, -0.14511517, 0.045022864, 0.20965824, -0.030178605, -0.11852413, -0.066858761, -0.01461118, 0.032898057, 0.02857255, 0.1088237, 0.07066118, 0.044605579, 0.022743503, -0.10785796, 0.20373915, 0.010565795, 0.063950166, -0.18701579, 0.062780239, -0.0042907735, 0.031276166, -0.006556896, -0.038440779, -0.01419229, -0.072688736, 0.13676986, 0.040385362, 0.010314438, 0.095734902, -0.0080824783, 0.011763249, -0.098884396, -0.040797569, 0.10534941, 0.12088351, -0.07317061, 0.063644305, 0.0830286, -0.050620016, -0.18088549, 0.03330183, 0.090282671, -0.25393733, 0.056058947, -0.020288708, 0.049997903, -0.044997148, -0.15860014, 0.15251927, 0.015151619, 0.088731326, -0.028061632, 0.11127418, 0.090425298, 0.0052096732, 0.053858042, 0.18543676, -0.066999368, -0.15851147, -0.11389373, 0.088093147, -0.08713299, -0.048095752, -0.063261949, 0.12453313, -0.051213119, -0.023759408, 0.048403475, -0.012721839, -0.021282939, -0.098075315, 0.066707589, -0.11601795, 0.20438787, -0.015739718, -0.052848384, 0.057336167, -0.01592578, 0.014057826, -0.058749981, -0.043632519, -0.031006066, 0.046038814, -0.065755703, 0.15442967, -0.082077362, -0.099808514, 0.016168201, 0.0046916353, -0.14556217, 0.11152669, 0.062443323, -0.00032889194, 0.0020548289, -0.026999777, 0.096809812, -0.11947374, 0.085579365, 0.16317753, 0.028130196, 0.014577032, 0.0079531483, 0.11340163, -0.15006165, 0.094127603, -0.0440454, 0.033095147]
Mindy Size: 1 x 128
Mindy Dims: 2
Any ideas what I'm doing wrong? Thanks.
I've experienced this several times. I couldn't find it explicitly mentioned in the OpenCV documentation, but cv::dnn::Net::forward returns a cv::Mat whose data member always points to the same zone of memory. Therefore, on the second forward pass that zone of memory is overwritten, and both madonnaMat and mindyMat point there.
As Christoph Rackwitz pointed out, you need to clone the cv::Mat before running the second inference:
_net.setInput(madonnaBlob);
cv::Mat madonnaMat = _net.forward();
madonnaMat = madonnaMat.clone(); // Copy memory
_net.setInput(mindyBlob);
cv::Mat mindyMat = _net.forward();
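The same pitfall exists in the Python bindings, where forward() can return a view into the net's internal buffer. A minimal sketch of the whole comparison with the fix applied (file names as in the question, same OpenFace model assumed):
import cv2
import numpy as np

net = cv2.dnn.readNetFromTorch("nn4.small2.v1.t7")

def embed(path):
    img = cv2.resize(cv2.imread(path), (96, 96))
    blob = cv2.dnn.blobFromImage(img, 1.0 / 255, (96, 96), (0, 0, 0),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward().copy()  # copy before the next forward() reuses the buffer

madonna = embed("Madonna_Face.jpg")
mindy = embed("Mindy_Face.jpg")
print(np.linalg.norm(madonna - mindy))  # Euclidean distance between embeddings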

PyTorch loading saved weights but not working

I have this code, which saves the weights after every epoch. But when I load the weights, the loss starts from its initial value again, which means the loading is failing somehow.
net = torchvision.models.resnet18(pretrained=True)
num_ftrs = net.fc.in_features
net.fc = nn.Linear(num_ftrs, 136)

def train():
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    criterion = L1Loss(reduction='sum')
    lr = 0.0000001
    optimizer = optim.Adam(net.parameters(), lr=lr, weight_decay=0.0005)
    net.to(device)
    state = torch.load('face2.txt')
    net.load_state_dict(state['state_dict'])
    optimizer.load_state_dict(state['optimizer'])
    running_loss = 0.0
    for epoch in range(0, 200000):
        for batch, data in enumerate(trainloader, 0):
            torch.cuda.empty_cache()
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = net(inputs).reshape(-1, 68, 2)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        state = {
            'epoch': epoch,
            'state_dict': net.state_dict(),
            'optimizer': optimizer.state_dict(),
        }
        torch.save(state, 'face2.txt')

if __name__ == '__main__':
    train()
The initial loss is 50k+, and after a few thousand epochs the loss drops to 50-60. When I re-run the code, I expect it to start from a nearby loss value, but it starts from around 50k again.
The code you wrote:
net = torchvision.models.resnet18(pretrained=True)
means you start over again with the same net, a pretrained resnet18. Instead, you should load the last saved state if one is present; this would fix your problem.
I will slightly update your notation:
state = {
    'epoch': epoch,
    'model_state_dict': net.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    ...
}
torch.save(state, filepath)
The learnable parameters live in the first state_dict (the model state dict).
The second state_dict is the optimizer state dict. Recall that the optimizer is used to improve the learnable parameters, but the optimizer state_dict itself holds nothing learnable.
Your code at some point should look like:
model.load_state_dict(state['model_state_dict'])
optimizer.load_state_dict(state['optimizer_state_dict'])
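Putting the two together, a minimal resume-if-present sketch (reusing net, optimizer, and the checkpoint filename from the question):
import os

checkpoint_path = 'face2.txt'
start_epoch = 0
if os.path.exists(checkpoint_path):  # load the last state only if one was saved
    state = torch.load(checkpoint_path)
    net.load_state_dict(state['model_state_dict'])
    optimizer.load_state_dict(state['optimizer_state_dict'])
    start_epoch = state['epoch'] + 1  # resume after the last saved epoch

for epoch in range(start_epoch, 200000):
    ...  # training loop as before, saving the state after every epoch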

peak fitting with lmfit - NaN outcome

I am trying to fit a peak to some data like this:
import scipy
from lmfit.models import LorentzianModel
x = shm.PTR_P
y = shm.PORE_V_P
mod = LorentzianModel()
pars = mod.guess(y, x=x)
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.25))
While I can do the fit if I generate the data, it doesn't work when I read them from another source. I am not too good at this and have no idea what the issue is. Here is the data:
[x,y]
[(34.145490000000002, 3.4599999999999999e-08),
(29.286449999999999, 4.8399999999999997e-08),
(25.118860000000002, 0.026773140000000001),
(21.544350000000001, 0.068791409999999997),
(18.4785, 0.083200979999999994),
(15.848929999999999, 0.02123179),
(11.659139999999999, 0.01551077),
(10.0, 0.084493879999999993),
(6.3095739999999996, 2.0899999999999998e-07),
(5.4116949999999999, 0.045209140000000002),
(4.6415889999999997, 0.054789360000000002),
(3.4145489999999996, 8.9399999999999993e-08),
(2.9286449999999999, 0.01100814),
(2.5118860000000001, 0.088990659999999999),
(1.84785, 3.5799999999999995e-07),
(1.5848930000000001, 0.099999009999999999),
(1.359356, 0.075139280000000003),
(1.1659139999999999, 0.167379),
(1.0, 0.57693050000000001),
(0.85769590000000007, 1.8658159999999999),
(0.73564230000000008, 8.4961369999999992),
(0.6309574, 25.299099999999999),
(0.54116949999999997, 21.413350000000001),
(0.46415889999999999, 13.408829999999998),
(0.39810719999999999, 8.3584750000000003),
(0.34145490000000006, 5.3982010000000002),
(0.29286440000000002, 3.7518540000000002),
(0.25118859999999998, 2.5325389999999999),
(0.21544349999999998, 1.7722470000000001),
(0.18478499999999998, 1.445808),
(0.15848929999999997, 1.182083),
(0.13593559999999999, 0.94957730000000007),
(0.1165914, 0.67620849999999999),
(0.10000000000000001, 0.46631620000000001),
(0.085769590000000007, 0.41001890000000002),
(0.07356422, 0.30625920000000001),
(0.063095730000000003, 0.24040219999999998),
(0.054116949999999997, 0.1942596),
(0.046415890000000001, 0.11306760000000002),
(0.039810720000000001, 0.099998470000000006),
(0.034145490000000001, 0.099998470000000006),
(0.029286449999999999, 0.02246857),
(0.025118870000000001, 0.077529909999999994)]
I would guess that either there are NaNs in your data or that the initial guess is so far off as to produce NaNs. The initial guess of 0 for sigma seems suspicious -- this should be > 0.
Either way, plotting the data and using np.isnan() would probably help isolate the problem.
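A minimal sketch of both checks (shm is the data source from the question; the starting value of 0.1 for sigma is an arbitrary positive choice):
import numpy as np
from lmfit.models import LorentzianModel

x = np.asarray(shm.PTR_P, dtype=float)       # force plain numeric arrays
y = np.asarray(shm.PORE_V_P, dtype=float)
print(np.isnan(x).any(), np.isnan(y).any())  # any NaNs in the inputs?

mod = LorentzianModel()
pars = mod.guess(y, x=x)
pars['sigma'].set(value=0.1, min=1e-6)       # keep the width strictly positive
out = mod.fit(y, pars, x=x)
print(out.fit_report(min_correl=0.25))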

Matplotlib concentration plot does not show up [Python 2.7 (with Eclipse)]

I would like to ask for help with my code in Python 2.7. I want to plot the concentration of a gas release.
I started with Python two weeks ago and had never seen it before; I have basic knowledge of Pascal, but that is a different language.
I do not know why it does not work...
This picture shows how it should look:
flickr.com/photos/95370861#N02/8696658905/in/photostream/lightbox/
My code:
from __future__ import division
from matplotlib.patches import Polygon, Rectangle
import math
import numpy as np
import matplotlib.pyplot as plt

def draw(x, y, ax, color, lw=1):
    xr, yr = np.copy(x)[::-1], np.copy(y)[::-1]
    for i in range(len(y)):
        y[i] = -y[i]
    verts = zip(x.tolist() + xr.tolist(), y.tolist() + yr.tolist())
    poly = Polygon(verts, facecolor=color, edgecolor='k', lw=lw)
    ax.add_patch(poly)

def concentration_chart_plume(x, y, ax, ticks, colors, title):
    maxX = 0
    for tick, color in zip(ticks[::-1], colors):
        x, y = calculation(tick)
        draw(np.array(x), np.array(y), ax, color)
        maxX = max([maxX] + x)
    ''' plot parameters '''
    plt.xlim(0, maxX)
    plt.axis('equal')
    plt.xlabel('$x[m]$')
    plt.ylabel('$y[m]$')
    plt.title(title)
    sqrs = [Rectangle((0, 0), 1, 1, fc=clr) for clr in colors[::-1]]
    desc = ["> %g" % tick for tick in ticks]
    plt.legend(sqrs, desc, title="Concentration $kg/m^3$")
    plt.grid()

def calculation(tick, m):
    cx = FILE_1
    sy = FILE_2
    x = FILE_3
    y = [sy * np.sqrt(2 * np.log(cx / (tick)))]  # I do not know how this can be limited to non-zero values
    return (x, y)

if __name__ == "__main__":
    ''' Concentration plot of continuous gas release '''
    title = u"conturs"
    colors = ["#9EFF00", "#FFF500", "#FF9900", "#FF2E00"]
    ticks = [0.1, 1e-2, 1e-3, 1e-4]  # concentration values in kg/m3 that I want to show
    fig = plt.figure()
    ax = fig.gca()
    plt.show()
FILE_1 values:
[ 2.21000000e+00 2.19000000e+00 2.16000000e+00 2.12000000e+00
2.08000000e+00 1.90000000e+00 1.71000000e+00 1.18000000e+00
7.93000000e-01 3.55000000e-01 1.93000000e-01 7.02500000e-02
3.70800000e-02 1.37100000e-02 7.42100000e-03 4.77800000e-03
3.39200000e-03 2.56200000e-03 2.02100000e-03 1.64600000e-03
1.37300000e-03 1.16800000e-03 1.00900000e-03 8.82300000e-04
7.80300000e-04 6.96400000e-04 6.26500000e-04 5.67500000e-04
5.17200000e-04 4.73800000e-04 4.36200000e-04 4.03200000e-04
3.74200000e-04 3.48500000e-04 3.25600000e-04 3.05100000e-04
2.86700000e-04 2.70000000e-04 2.54900000e-04 2.28500000e-04
2.17000000e-04 1.96700000e-04 1.87700000e-04 1.71600000e-04
1.64300000e-04 1.51300000e-04 1.45500000e-04 1.34800000e-04
1.29900000e-04 1.21100000e-04 1.17000000e-04 1.09500000e-04
1.06000000e-04 9.96600000e-05 9.67000000e-05 9.12100000e-05
8.86500000e-05 8.38800000e-05 8.16600000e-05 7.74900000e-05
7.55300000e-05 7.18600000e-05 7.01300000e-05 6.68800000e-05
6.53500000e-05 6.38700000e-05 6.24500000e-05 6.10800000e-05
5.97600000e-05 5.84800000e-05 5.72600000e-05 5.60700000e-05
5.49200000e-05 5.38100000e-05 5.27400000e-05 5.17100000e-05
5.07000000e-05 4.97300000e-05 4.92400000e-05 4.72200000e-05
4.35700000e-05 4.19100000e-05 3.88600000e-05 3.74700000e-05
3.49000000e-05 3.37200000e-05 3.15400000e-05 3.05300000e-05
2.86500000e-05 2.77800000e-05 2.61500000e-05 2.54000000e-05
2.39800000e-05 2.33200000e-05 2.20700000e-05 2.14900000e-05
2.03900000e-05 1.98700000e-05 1.89000000e-05 1.84300000e-05
1.75600000e-05 1.71500000e-05 1.63700000e-05 1.60000000e-05
1.53000000e-05 1.49700000e-05 1.43300000e-05 1.40300000e-05
1.34600000e-05 1.31800000e-05 1.26600000e-05 1.24100000e-05
1.19400000e-05 1.17100000e-05 1.12700000e-05 1.10600000e-05
1.06600000e-05 1.04700000e-05 1.01000000e-05 9.92500000e-06
9.58500000e-06 9.42200000e-06 9.10900000e-06 8.81100000e-06]
and FILE_2 values:
[ 9.21500000e-07 1.68000000e-01 3.13000000e-01 4.47000000e-01
5.58000000e-01 8.37000000e-01 1.08000000e+00 1.67000000e+00
2.20000000e+00 3.31000000e+00 4.30000000e+00 6.53000000e+00
8.50000000e+00 1.29000000e+01 1.68000000e+01 2.03000000e+01
2.37000000e+01 2.68000000e+01 2.98000000e+01 3.27000000e+01
3.55000000e+01 3.82000000e+01 4.08000000e+01 4.34000000e+01
4.59000000e+01 4.83000000e+01 5.07000000e+01 5.31000000e+01
5.54000000e+01 5.77000000e+01 5.99000000e+01 6.21000000e+01
6.43000000e+01 6.65000000e+01 6.86000000e+01 7.07000000e+01
7.28000000e+01 7.49000000e+01 7.69000000e+01 8.09000000e+01
8.29000000e+01 8.68000000e+01 8.87000000e+01 9.25000000e+01
9.44000000e+01 9.81000000e+01 9.99000000e+01 1.04000000e+02
1.05000000e+02 1.09000000e+02 1.11000000e+02 1.14000000e+02
1.16000000e+02 1.19000000e+02 1.21000000e+02 1.24000000e+02
1.26000000e+02 1.29000000e+02 1.31000000e+02 1.34000000e+02
1.36000000e+02 1.39000000e+02 1.41000000e+02 1.44000000e+02
1.46000000e+02 1.47000000e+02 1.49000000e+02 1.50000000e+02
1.52000000e+02 1.53000000e+02 1.55000000e+02 1.57000000e+02
1.58000000e+02 1.60000000e+02 1.61000000e+02 1.63000000e+02
1.64000000e+02 1.66000000e+02 1.66000000e+02 1.69000000e+02
1.73000000e+02 1.75000000e+02 1.80000000e+02 1.82000000e+02
1.87000000e+02 1.89000000e+02 1.93000000e+02 1.96000000e+02
2.00000000e+02 2.02000000e+02 2.07000000e+02 2.09000000e+02
2.13000000e+02 2.15000000e+02 2.20000000e+02 2.22000000e+02
2.26000000e+02 2.28000000e+02 2.33000000e+02 2.35000000e+02
2.39000000e+02 2.41000000e+02 2.46000000e+02 2.48000000e+02
2.52000000e+02 2.54000000e+02 2.59000000e+02 2.61000000e+02
2.65000000e+02 2.67000000e+02 2.72000000e+02 2.74000000e+02
2.78000000e+02 2.80000000e+02 2.84000000e+02 2.86000000e+02
2.91000000e+02 2.93000000e+02 2.97000000e+02 2.99000000e+02
3.03000000e+02 3.06000000e+02 3.10000000e+02 3.14000000e+02]
and FILE_3 values:
[ 2.59000000e+00 2.69000000e+00 2.92000000e+00 3.23000000e+00
3.54000000e+00 4.48000000e+00 5.41000000e+00 8.00000000e+00
1.06000000e+01 1.66000000e+01 2.27000000e+01 3.82000000e+01
5.37000000e+01 9.37000000e+01 1.34000000e+02 1.74000000e+02
2.14000000e+02 2.54000000e+02 2.94000000e+02 3.34000000e+02
3.74000000e+02 4.14000000e+02 4.54000000e+02 4.94000000e+02
5.34000000e+02 5.74000000e+02 6.14000000e+02 6.54000000e+02
6.94000000e+02 7.34000000e+02 7.74000000e+02 8.14000000e+02
8.54000000e+02 8.94000000e+02 9.34000000e+02 9.74000000e+02
1.01400000e+03 1.05400000e+03 1.09400000e+03 1.17400000e+03
1.21400000e+03 1.29400000e+03 1.33400000e+03 1.41400000e+03
1.45400000e+03 1.53400000e+03 1.57400000e+03 1.65400000e+03
1.69400000e+03 1.77400000e+03 1.81400000e+03 1.89400000e+03
1.93400000e+03 2.01400000e+03 2.05400000e+03 2.13400000e+03
2.17400000e+03 2.25400000e+03 2.29400000e+03 2.37400000e+03
2.41400000e+03 2.49400000e+03 2.53400000e+03 2.61400000e+03
2.65400000e+03 2.69400000e+03 2.73400000e+03 2.77400000e+03
2.81400000e+03 2.85400000e+03 2.89400000e+03 2.93400000e+03
2.97400000e+03 3.01400000e+03 3.05400000e+03 3.09400000e+03
3.13400000e+03 3.17400000e+03 3.19500000e+03 3.25500000e+03
3.37500000e+03 3.43500000e+03 3.55500000e+03 3.61500000e+03
3.73500000e+03 3.79500000e+03 3.91500000e+03 3.97500000e+03
4.09500000e+03 4.15500000e+03 4.27500000e+03 4.33500000e+03
4.45500000e+03 4.51500000e+03 4.63500000e+03 4.69500000e+03
4.81500000e+03 4.87500000e+03 4.99500000e+03 5.05500000e+03
5.17500000e+03 5.23500000e+03 5.35500000e+03 5.41500000e+03
5.53500000e+03 5.59500000e+03 5.71500000e+03 5.77500000e+03
5.89500000e+03 5.95500000e+03 6.07500000e+03 6.13500000e+03
6.25500000e+03 6.31500000e+03 6.43500000e+03 6.49500000e+03
6.61500000e+03 6.67500000e+03 6.79500000e+03 6.85500000e+03
6.97500000e+03 7.03500000e+03 7.15500000e+03 7.27500000e+03]
Thank you for your help
Prema
Step 1: Make sure you can actually see a simple plot.
You need to set up python so that the following script shows you something:
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0,2*np.pi)
y = np.sin(x)
plt.plot(x,y)
plt.show()
If this doesn't work, it's no surprise that you're not seeing anything with your more complicated plot. Getting this working depends on the details of your system, but a good place to start is by figuring out which backend you want to use.
For interactive use, most people who need to interact with plots frequently will just use ipython with pylab, started from the command line as
ipython --pylab
You may need to install ipython and matplotlib (which provides pylab) to get this working.
Step 1b: Save output figure?
Alternatively, if the interactive plot isn't working for you, or you just want to save the figure to a file, you can replace plt.show() with
plt.savefig('/path/to/file.png')
You can replace png with pdf or various other file types.
Step 2: Get your code working.
The code you've posted contains several errors, and no request to draw a plot. Here are some corrections I see. Your function definitions have too many parameters; they should be
def concentration_chart_plume(ax, ticks, colors, title):
and
def calculation(tick):
Your equation should be something like
y = np.asarray(sy) * np.sqrt(2 * np.log(np.asarray(cx) / (tick)))
Python doesn't know how to multiply lists together, so you have to make them into numpy arrays. Also, you should insert as the second-to-last line of your code (right before plt.show()) the line
concentration_chart_plume(ax, ticks, colors, title)
which actually draws the plot.
Finally, I assume that you're doing something sensible so that your script gets the right values of FILE_1, FILE_2, and FILE_3.
Step 3: Check your equation
I don't know what you're trying to plot, but the equation you're using gives invalid values (as you seem to know) any time the element in cx is less than 1.0. Is this really the right equation? If so, you might want to use something like
cx = [max(1.0, c) for c in cx]
after you set the values of cx, but before you use them. Then, the logarithm will always be 0.0 or greater, so that the square-root exists. This basically throws away smaller values of your data, so I'm assuming that they're just noise or something....
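Putting steps 2 and 3 together, a sketch of the corrected calculation (FILE_1, FILE_2, and FILE_3 assumed to hold the values shown above):
def calculation(tick):
    cx = np.asarray([max(1.0, c) for c in FILE_1])  # clamp so the logarithm stays >= 0
    sy = np.asarray(FILE_2)                         # numpy arrays so the math vectorizes
    x = list(FILE_3)
    y = sy * np.sqrt(2 * np.log(cx / tick))
    return (x, y)
with the call concentration_chart_plume(ax, ticks, colors, title) inserted right before plt.show(), as described above.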
Step N+1: Enjoy the results
After making all of the changes listed in steps 2 and 3 (and correcting the spelling of the title), I get a very reasonable-looking plot.