I want to plot an array as a grayscale image. Here is my array:
[[[ 0.27543858 0.30173767 -0.0101363 0.30631673 0.08575112
0.02205707 -0.15502007 0.11055402 -0.11761152]
[ 0.23695524 0.19820367 -0.08758862 0.02446048 0.29235974
-0.11381532 -0.00426369 0.15231356 -0.24601455]]]
Its shape is (1, 2, 9). It should produce two images with 9 values each.
I have tried this so far:
col_size = 1
row_size = 2
index = 0
fig, ax = plt.subplots(row_size, col_size, figsize=(12,8))
for row in range(0, row_size):
    for col in range(0, col_size):
        ax[row][col].imshow(result_array[:, :, index], cmap='gray')
        index += 1
plt.show()
Here is the answer:
result_array = result_array.reshape(2, 3, 3)
print(result_array)
print(result_array.shape)

col_size = 2
row_size = 1
filter_index = 0
fig, ax = plt.subplots(row_size, col_size, figsize=(12,8), squeeze=False)
for row in range(0, row_size):
    for col in range(0, col_size):
        ax[row][col].imshow(result_array[filter_index, :, :], cmap='gray')
        filter_index += 1
plt.show()
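Two details make this work: the reshape to (2, 3, 3) turns each set of 9 values into a 3x3 image, and squeeze=False makes plt.subplots return a 2-D array of axes even when one dimension is 1, which is why the ax[row][col] indexing that failed in the original attempt now succeeds.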
I represent an n*m matrix as a chessboard:
1 0 2 0
0 3 0 4
5 0 6 0
0 7 0 8
I don't need to store the zeros in my 1D vector:
vector v = {1, 2, 3, 4, ...};
I ask the user for a row and column number. How can I return the element at row i, column j? If
(i + j) % 2 != 0
I return 0, but I don't know what to do when
(i + j) % 2 == 0
Can you help me? (sorry for my bad English)
For a regular matrix stored as a 1D vector, the coordinate-to-index formula is i * width + j in row-major order (or i + j * height in column-major order). Since half of the cells are 0 and are not stored, you just have to divide by 2. Your vector lists the non-zero entries in row-major order, so:
if ((i + j) % 2 != 0) return 0;
else return v[(i * width + j) / 2];
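A quick sanity check of that formula on the 4x4 example, written as a Python sketch (the names v, width, and at are mine, not from the question):

v = [1, 2, 3, 4, 5, 6, 7, 8]  # non-zero entries in row-major order
width = 4  # number of columns

def at(i, j):
    # zeros sit wherever (i + j) is odd; the stored entries compress 2:1
    if (i + j) % 2 != 0:
        return 0
    return v[(i * width + j) // 2]

for i in range(4):
    print([at(i, j) for j in range(4)])
# [1, 0, 2, 0]
# [0, 3, 0, 4]
# [5, 0, 6, 0]
# [0, 7, 0, 8]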
I'm new to OpenCV (in C++) and image processing. Given a grayscale image, I want to replace each pixel's value with the average grayscale value of its 3x3 neighborhood.
First of all I open the image
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
// Example of image
[4 3 9 1,
2 9 8 0,
3 5 2 1,
7 5 8 3]
In order to get the average value of the 3x3 neighborhood at the corners (top left, top right, bottom left and bottom right), I pad the image with a 1-pixel constant border on each side:
Mat imgPadding;
copyMakeBorder(img, imgPadding, 1,1,1,1, BORDER_CONSTANT, Scalar(0));
// Padding example
[0 0 0 0 0 0,
0 4 3 9 1 0,
0 2 9 8 0 0,
0 3 5 2 1 0,
0 7 5 8 3 0,
0 0 0 0 0 0]
Now I'm having trouble with the output image. I have tried various ways, but none brings me to the solution. I tried this, using the mean() function to get the average grayscale value of the 3x3 submatrix at (i, j), extracted with Rect(). The for loop starts at the first non-padding pixel and ends at the last non-padding pixel.
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
// initialization of the output Mat object with same input size and type
for (int i = 1; i < imgAvg.rows; i++)
    for (int j = 1; j < imgAvg.cols; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
but I got this runtime error
main: malloc.c:2379: sysmalloc: Assertion `(old_top == initial_top (av) && old_size == 0) || ((unsigned long) (old_size) >= MINSIZE && prev_inuse (old_top) && ((unsigned long) old_end & (pagesize - 1)) == 0)' failed.
I tried also reducing randomly the range
for (int i = 1; i < imgAvg.rows - 35; i++)
    for (int j = 1; j < imgAvg.cols - 35; j++)
        imgAvg.at<Scalar>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)));
and I got this weird output: screenshot
Thanks in advance!
EDIT:
Thank you all for the answers; I didn't know about the blur() function yet. With it, I read the image and simply call blur():
Mat img = imread(samples::findFile(argv[1]), IMREAD_GRAYSCALE);
Mat imgAvg = Mat::zeros(img.rows, img.cols, img.type());
blur(img, imgAvg, Size(3, 3));
But since I'm still a beginner, and I think the purpose of the exercise was to write the code by hand, I also tried this working solution:
for (int i = 1; i <= imgAvg.rows; i++)
    for (int j = 1; j <= imgAvg.cols; j++)
        imgAvg.at<uint8_t>(Point(j - 1, i - 1)) = mean(imgPadding(Rect(j - 1, i - 1, 3, 3)))[0];
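(The original crash, by the way, came from at<Scalar>: Scalar holds four doubles, 32 bytes, while each element of a CV_8U Mat is a single byte, so every write ran far past its pixel and corrupted the heap, hence the malloc assertion. Writing through at<uint8_t> fixes that.)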
Result of the algorithm (identical for both solutions)
Just apply a smoothing filter to the image - the blur function in the imgproc module should accomplish what you need. A good example is in the documentation: https://docs.opencv.org/3.4/dc/dd3/tutorial_gausian_median_blur_bilateral_filter.html
The arguments you need are the source image (src), a destination image (dst), and the kernel size (ksize), here Size(3, 3):
src = ...
Mat dst = Mat::zeros( src.size(), src.type() );
blur( src, dst, Size( 3, 3 ) );
Smoothing manually will not be as performant, and is more prone to error.
Good luck!
What you want to do is called "box filtering" in image processing. In OpenCV you do:
cv::blur(src_img,
         dest_img,         // same shape and type as src, cannot be src
         cv::Size(3, 3));  // use a kernel of size 3x3
The default padding is to reflect the border pixel, which won't skew the image statistics. See the documentation if you prefer a different border mode.
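For reference, the same call through the Python bindings (a sketch; the filename is an assumption). Passing a borderType reproduces the zero padding from the question instead of the default reflection:

import cv2

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
# 3x3 box filter; BORDER_CONSTANT pads with zeros, like the copyMakeBorder above
dst = cv2.blur(img, (3, 3), borderType=cv2.BORDER_CONSTANT)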
I am attempting to implement a perceptron. I have loaded a 100x2 array of values between 0 and 100. Each item in the array has a label of either -1 or 1.
I believe the perceptron is working; however, I cannot plot the decision boundary as shown here: plot decision boundary matplotlib
When I run my code I only see a single color background. I would expect to see two colors, one color for each label in my data set (-1 and 1).
My current output: I expect to see two colors for the background (-1 or 1)
An example of what I hope to see, from the sklearn documentation
import numpy as np
from matplotlib import pyplot as plt

def generate_data():
    # generate a dataset that is linearly separable
    group_1 = np.random.randint(50, 100, size=(50, 2))
    group_1_labels = np.full((50, 1), 1)
    group_2 = np.random.randint(0, 49, size=(50, 2))
    group_2_labels = np.full((50, 1), -1)
    # add a bias value of -1
    bias = np.full((50, 1), -1)
    # add labels, upper right quadrant are 1, lower left are -1
    group_1_with_bias = np.hstack((group_1, bias))
    group_2_with_bias = np.hstack((group_2, bias))
    group_1_labeled = np.hstack((group_1_with_bias, group_1_labels))
    group_2_labeled = np.hstack((group_2_with_bias, group_2_labels))
    # merge our labeled data and shuffle!
    merged_data = np.vstack((group_1_labeled, group_2_labeled))
    np.random.shuffle(merged_data)
    return merged_data

data = generate_data()

# load data, strip labels, keep the -1 bias column
X = data[:, :3]

# create labels matrix
l = np.ravel(data[:, 3:])

def perceptron_sgd(X, l, c, epochs):
    # initialize weights
    w = np.zeros(3)
    errors = []
    for epoch in range(epochs):
        total_error = 0
        for i, x in enumerate(X):
            if (np.dot(x, w) * l[i]) <= 0:
                total_error += (np.dot(x, w) * l[i])
                w = w + c * (x * l[i])
        errors.append(total_error * -1)
        print("epoch " + str(epoch) + ": " + str(w))
    return w, errors

def classify(X, l, w):
    z = np.dot(X, w)
    print(z)
    z[z <= 0] = -1
    z[z > 0] = 1
    # return a matrix of predicted labels
    return z

w, errors = perceptron_sgd(X, l, .001, 36)

# X - some data in a 2-dimensional np.array
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .2), np.arange(y_min, y_max, .2))

# here "model" is your model's prediction (classification) function
Z = classify(np.c_[xx.ravel(), yy.ravel()], l, w[:-1])  # strip the bias from weights

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis('off')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=l, cmap=plt.cm.Paired)
I got it to work.
Standardize your X:
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(X[:, :-1])
X_trans = np.column_stack((scaler.transform(X[:, :-1]), X[:, -1]))
Use a better initialization than zeros:
#initialize weights
r = np.sqrt(2)
w = np.random.uniform(-r, r, (3,))
Add the learned bias during prediction. Since the bias feature is -1 in the training data, the intercept enters with a minus sign:
z = np.dot(X, w[:-1]) - w[-1]
Standardize during prediction as well (using the scaler fitted on the training inputs):
Z = classify(scaler.transform(np.c_[xx.ravel(), yy.ravel()]),
             l, w)  # pass the full weight vector; classify splits off the bias
Generally, it's always a good idea to standardize the inputs.
Entire code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline

def generate_data():
    # generate a dataset that is linearly separable
    group_1 = np.random.randint(50, 100, size=(50, 2))
    group_1_labels = np.full((50, 1), 1)
    group_2 = np.random.randint(0, 49, size=(50, 2))
    group_2_labels = np.full((50, 1), -1)
    # add a bias value of -1
    bias = np.full((50, 1), -1)
    # add labels, upper right quadrant are 1, lower left are -1
    group_1_with_bias = np.hstack((group_1, bias))
    group_2_with_bias = np.hstack((group_2, bias))
    group_1_labeled = np.hstack((group_1_with_bias, group_1_labels))
    group_2_labeled = np.hstack((group_2_with_bias, group_2_labels))
    # merge our labeled data and shuffle!
    merged_data = np.vstack((group_1_labeled, group_2_labeled))
    np.random.shuffle(merged_data)
    return merged_data

data = generate_data()

# load data, strip labels, keep the -1 bias column
X = data[:, :3]

# create labels matrix
l = np.ravel(data[:, 3:])

from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(X[:, :-1])
X_trans = np.column_stack((scaler.transform(X[:, :-1]), X[:, -1]))

def perceptron_sgd(X, l, c, epochs):
    # initialize weights
    r = np.sqrt(2)
    w = np.random.uniform(-r, r, (3,))
    errors = []
    for epoch in range(epochs):
        total_error = 0
        for i, x in enumerate(X):
            if (np.dot(x, w) * l[i]) <= 0:
                total_error += (np.dot(x, w) * l[i])
                w = w + c * (x * l[i])
        errors.append(total_error * -1)
        print("epoch " + str(epoch) + ": " + str(w))
    return w, errors

def classify(X, l, w):
    # the bias feature is -1 in the training data, so subtract the learned bias
    z = np.dot(X, w[:-1]) - w[-1]
    print(z)
    z[z <= 0] = -1
    z[z > 0] = 1
    # return a matrix of predicted labels
    return z

w, errors = perceptron_sgd(X_trans, l, .01, 25)

# X - some data in a 2-dimensional np.array
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, .1), np.arange(y_min, y_max, .1))

# classify the grid in the standardized space learned from the training data
Z = classify(scaler.transform(np.c_[xx.ravel(), yy.ravel()]), l, w)

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, alpha=0.4)
#plt.axis('off')

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=l, cmap=plt.cm.Paired)
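As an optional cross-check, the learned boundary can also be drawn as an explicit line in the original coordinates by undoing the standardization (a sketch, not part of the original answer; it assumes w[1] != 0 and the sign convention above):

m, s = scaler.mean_, scaler.scale_
xs = np.linspace(x_min, x_max, 100)
# boundary: w0*(x - m0)/s0 + w1*(y - m1)/s1 - w2 = 0, solved for y
ys = (w[2] - w[0] * (xs - m[0]) / s[0]) * s[1] / w[1] + m[1]
plt.plot(xs, ys, 'k--')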
So I wanted to see if I could make fractal flames using matplotlib, and figured a good test would be the Sierpinski triangle. I modified a working version I had that simply performed the chaos game by normalizing the x range from -2, 2 to 0, 400 and the y range from 0, 2 to 0, 200. I also truncated the x and y coordinates to 2 decimal places and multiplied by 100 so that the coordinates could be put into a matrix that I could apply a color map to. Here's the code I'm working on right now (please forgive the messiness):
import numpy as np
import matplotlib.pyplot as plt
import math
import random
def f(x, y, n):
    N = np.array([[x, y]])
    M = np.array([[1/2.0, 0], [0, 1/2.0]])
    b = np.array([[.5], [0]])
    b2 = np.array([[0], [.5]])
    if n == 0:
        return np.dot(M, N.T)
    elif n == 1:
        return np.dot(M, N.T) + 2*b
    elif n == 2:
        return np.dot(M, N.T) + 2*b2
    elif n == 3:
        return np.dot(M, N.T) - 2*b
def norm_x(n, minX_1, maxX_1, minX_2, maxX_2):
    rng = maxX_1 - minX_1
    n = (n - minX_1) / rng
    rng_2 = maxX_2 - minX_2
    n = (n * rng_2) + minX_2
    return n

def norm_y(n, minY_1, maxY_1, minY_2, maxY_2):
    rng = maxY_1 - minY_1
    n = (n - minY_1) / rng
    rng_2 = maxY_2 - minY_2
    n = (n * rng_2) + minY_2
    return n
# Plot ranges
x_min, x_max = -2.0, 2.0
y_min, y_max = 0, 2.0
# Even intervals for points to compute orbits of
x_range = np.arange(x_min, x_max, (x_max - x_min) / 400.0)
y_range = np.arange(y_min, y_max, (y_max - y_min) / 200.0)
mat = np.zeros((len(x_range) + 1, len(y_range) + 1))
random.seed()
x = 1
y = 1
for i in range(0, 100000):
    n = random.randint(0, 3)
    V = f(x, y, n)
    x = V.item(0)
    y = V.item(1)
    # array indices must be integers, so truncate the normalized coordinates
    mat[int(norm_x(x, -2, 2, 0, 400)), int(norm_y(y, 0, 2, 0, 200))] += 50
fig = plt.figure(figsize=(10,10))
plt.xlabel('x0')
plt.ylabel('y')
plt.imshow(mat, cmap="spectral", extent=[-2, 2, 0, 2])
plt.show()
The mathematics seem solid here so I suspect something weird is going on with how I'm handling where things should go into the 'mat' matrix and how the values in there correspond to the colormap.
If I understood your problem correctly, you need to transpose your matrix using the method .T. So just replace
fig = plt.figure(figsize=(10,10))
plt.imshow(mat, cmap="spectral", extent=[-2,2, 0, 2])
plt.show()
by
fig = plt.figure(figsize=(10,10))
ax = plt.gca()
ax.imshow(mat.T, cmap="spectral", extent=[-2, 2, 0, 2], origin="lower")
plt.show()
The argument origin="lower" tells imshow to put the origin of your matrix at the bottom of the figure.
Hope it helps.
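(The transpose is needed because imshow draws mat[row, col] with the row index running down the vertical axis, while the loop in the question fills mat[x_bin, y_bin] with x first; .T swaps the axes so x ends up horizontal.)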
I have 2 ndarrays with 3 dimensions, and I need to calculate the R-squared between them. To clarify:
Array1.shape = Array2.shape = (100, 100, 10)
So...
resultArray = np.ones(100*100).reshape(100, 100)
for i in range(Array1.shape[0]):
    for j in range(Array1.shape[1]):
        slope, intercept, r_value, p_value, std_err = scipy.stats.stats.linregress(Array1[i:i+1, j:j+1, :], Array2[i:i+1, j:j+1, :])
        R2 = r_value**2
        resultArray[i, j] = R2
If passed two arrays, stats.linregress expects the two arrays to be 1-dimensional.
Array1[i:i+1,j:j+1,:] has shape (1, 1, 10), so it is 3-dimensional. So instead use Array1[i, j, :]:
import numpy as np
import scipy.stats as stats

Array1 = np.random.random((100, 100, 10))
Array2 = np.random.random((100, 100, 10))

resultArray = np.ones(100*100).reshape(100, 100)

for i in range(Array1.shape[0]):
    for j in range(Array1.shape[1]):
        slope, intercept, r_value, p_value, std_err = stats.linregress(
            Array1[i, j, :],
            Array2[i, j, :])
        R2 = r_value**2
        resultArray[i, j] = R2

print(resultArray)
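If the double loop is too slow, the same values can be computed in one vectorized step from the definition of Pearson's r along the last axis (a sketch, not part of the original answer; it matches r_value**2 up to floating-point error):

# center both arrays along the sample (last) axis
A1 = Array1 - Array1.mean(axis=2, keepdims=True)
A2 = Array2 - Array2.mean(axis=2, keepdims=True)
# Pearson correlation per (i, j) cell, then square it
r = (A1 * A2).sum(axis=2) / np.sqrt((A1**2).sum(axis=2) * (A2**2).sum(axis=2))
resultArray = r**2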