I have a big (600,600,600) numpy array filled with my data. Now I would like to extract regions from this with a given width around an arbitrary line through the box.
For the line I have the x, y and z coordinates of every point in separate numpy arrays. So let's say the line has 35 points in the data box, then the x, y and z arrays each have lengths of 35 as well. I can extract the points along the line itself by using indexing like this
extraction = data[z,y,x]
Now ideally I'd like to extract a box around it by doing something like the following
extraction = data[z-3:z+3, y-3:y+3, x-3:x+3]
but because x, y and z are arrays, this is not possible. The only way I could think of doing this is with a for-loop over each point, so
extraction = np.array([])
for i in range(len(x)):
    extraction = np.append(extraction, data[z[i]-3:z[i]+3, y[i]-3:y[i]+3, x[i]-3:x[i]+3])
and then reshaping the extraction array afterwards. However, this is very slow, and there is some overlap between the slices in this for-loop that I'd like to prevent.
Is there a simple way to do this directly without a for-loop?
EDIT:
Let me rephrase the question through another idea I came up with that is also slow. I have a line running through the datacube. I have lists of x, y and z coordinates (the coordinates being the indices in the datacube array) with all the points that define the line.
As an example these lists look like this:
x_line: [345 345 345 345 342 342 342 342 342 342 342 342 342 342 342 342]
y_line: [540 540 540 543 543 543 543 546 546 546 549 549 549 549 552 552]
z_line: [84 84 84 87 87 87 87 87 90 90 90 90 90 93 93 93]
As you can see, some of these coordinates are identical, because the line is defined in different coordinates and then binned to the resolution of the data box.
Now I want to mask out all cells in the datacube that are more than 3 cells away from the line.
For a single point along the line (x_line[i], y_line[i], z_line[i]) this is relatively easy. I created a meshgrid for the coordinates in the datacube, then created a mask array of zeros and set everything satisfying the condition to 1:
data = np.random.rand(600,600,600)
x_box,y_box,z_box = np.meshgrid(np.arange(600),np.arange(600),np.arange(600))
mask = np.zeros(np.shape(data))
for i in range(len(x_line)):
    distance = np.sqrt((x_box-x_line[i])**2 + (y_box-y_line[i])**2 + (z_box-z_line[i])**2)
    mask[distance <= 3] = 1.
extraction = data[mask == 1.]
The advantage of this is that the mask array removes the problem of having duplicate extractions. However, both the meshgrid and distance calculations are very slow. So is it possible to do the calculation of the distance directly on the entire line without having to do a for-loop over each line point, so that it directly masks all cells that are within a distance of 3 cells from ANY of the line points?
How about this?
# .shape = (N,)
x, y, z = ...
# offsets in [-3, 3), .shape = (6, 6, 6)
xo, yo, zo = np.indices((6, 6, 6)) - 3
# box indices, .shape = (6, 6, 6, N)
xb, yb, zb = x + xo[...,np.newaxis], y + yo[...,np.newaxis], z + zo[...,np.newaxis]
# .shape = (6, 6, 6, N)
extractions = data[xb, yb, zb]
This extracts a series of 6x6x6 cubes, each "centered" on the coordinates in x, y, and z.
This will produce duplicate coordinates, and it will fail near the borders (negative indices wrap around, and indices above 599 raise an IndexError).
If you keep your xyz in one array, this gets a little less verbose, and you can remove the duplicates:
# .shape = (N,3)
xyz = ...
# offsets in [-3, 3), .shape = (6, 6, 6, 3)
xyz_offset = np.moveaxis(np.indices((6, 6, 6)) - 3, 0, -1)
# box indices, .shape = (6, 6, 6, N, 3)
xyz_box = xyz + xyz_offset[...,np.newaxis,:]
if remove_duplicates:  # optional: drop repeated cells
    # shape (M, 3)
    xyz_box = xyz_box.reshape(-1, 3)
    xyz_box = np.unique(xyz_box, axis=0)
xb, yb, zb = np.moveaxis(xyz_box, -1, 0)
extractions = data[xb, yb, zb]
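If you want the mask formulation from the edit without the per-point loop, one option is to query a KD-tree built on the line points and only test cells inside the line's padded bounding box. This is a sketch, assuming SciPy is available and using the (z, y, x) index order from the question:
import numpy as np
from scipy.spatial import cKDTree

# line points (indices into the cube), stacked as (N, 3) in (z, y, x) order
line_points = np.column_stack([z_line, y_line, x_line])
tree = cKDTree(line_points)

# only cells inside the line's bounding box, padded by 3, can possibly qualify
# (clip these ranges to [0, 599] if the line runs close to a cube edge)
lo = line_points.min(axis=0) - 3
hi = line_points.max(axis=0) + 4
zz, yy, xx = np.meshgrid(np.arange(lo[0], hi[0]),
                         np.arange(lo[1], hi[1]),
                         np.arange(lo[2], hi[2]), indexing='ij')
cells = np.column_stack([zz.ravel(), yy.ravel(), xx.ravel()])

# distance from every candidate cell to its nearest line point
dist, _ = tree.query(cells, k=1)
keep = cells[dist <= 3]                      # within 3 cells of ANY line point
extraction = data[keep[:, 0], keep[:, 1], keep[:, 2]]
The KD-tree query handles the duplicate line points automatically, since each cell is tested only once against its nearest line point.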
I have written C++ code to numerically solve a PDE. I would like to plot the result. I have outputted the data to an ASCII file as 3 columns of numbers: the x-coordinate, the y-coordinate and the z-coordinate. This might look like
0.01 7 -3
-12 1.2 -0.24
...
I often have in excess of 1000 data points. I want to plot a surface. I was able to load the data in both R and Octave. In R, scatterplot3d worked, and in Octave, plot3 worked. However, I wish to produce a surface, and not distinct points (scatterplot3d) or a curve (plot3). I am struggling to get mesh or surf to work from data in Octave. I am looking for a simple way to plot a surface in 3D space with Octave, R, C++ or any other program.
You could coerce the data into the correct format for plotting with the base R function persp. This requires a vector of unique x values, a vector of unique y values, and a matrix of z values which is a length(unique(x)) by length(unique(y)) matrix.
Suppose your data looks like this:
x <- y <- seq(-pi, pi, length = 20)
df <- expand.grid(x = x, y = y)
df$z <- cos(df$x) + sin(df$y)
head(df)
#> x y z
#> 1 -3.141593 -3.141593 -1.00000000
#> 2 -2.810899 -3.141593 -0.94581724
#> 3 -2.480205 -3.141593 -0.78914051
#> 4 -2.149511 -3.141593 -0.54694816
#> 5 -1.818817 -3.141593 -0.24548549
#> 6 -1.488123 -3.141593 0.08257935
Then you can create a matrix like this:
z <- tapply(df$z, list(df$x, df$y), mean)
So your plot would look like this:
persp(unique(df$x), unique(df$y), z,
col = "gold", theta = 45, shade = 0.75, ltheta = 90)
If your x and y co-ordinates are not nicely aligned, then a more general approach would be:
z <- tapply(df$z, list(cut(df$x, 20), cut(df$y, 20)), mean, na.rm = TRUE)
persp(as.numeric(factor(levels(cut(df$x, 20)), levels(cut(df$x, 20)))),
as.numeric(factor(levels(cut(df$y, 20)), levels(cut(df$y, 20)))),
z, col = "gold", theta = 45, shade = 0.75, ltheta = 90, xlab = "x",
ylab = "y")
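Since the question allows any program, here is a minimal Python sketch of the same idea using matplotlib's plot_trisurf, assuming the three columns sit in a whitespace-separated file (the filename data.txt is just a placeholder):
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (registers the 3d projection on older matplotlib)

# load the three columns: x, y, z
x, y, z = np.loadtxt("data.txt", unpack=True)

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
# triangulates the scattered (x, y) points and draws a surface over them
ax.plot_trisurf(x, y, z, cmap="viridis")
plt.show()
Unlike persp, plot_trisurf does not require the points to lie on a regular grid, so no binning step is needed.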
I'd like to draw a filled ellipse with Python. This would be easy if I could use PIL or some other library. The problem is I need the ellipse in a .dxf file format. Therefore I used the dxfwrite package. This allows me to draw an ellipse, but I couldn't find a way to fill it with a solid color. The following code draws an ellipse outline, but does not fill it.
import dxfwrite
from dxfwrite import DXFEngine as dxf
name = 'ellipse.dxf'
dwg = dxf.drawing(name)
dwg.add(dxf.ellipse((0,0), 5., 10., segments=200))
dwg.save()
Does anybody of you guys know a solution?
The HATCH entity is not supported by dxfwrite. If you use ezdxf, this is the solution:
import ezdxf
dwg = ezdxf.new('AC1015') # hatch requires the DXF R2000 (AC1015) format or later
msp = dwg.modelspace() # adding entities to the model space
# important: major axis >= minor axis (ratio <= 1.) else AutoCAD crashes
msp.add_ellipse((0, 0), major_axis=(0, 10), ratio=0.5)
hatch = msp.add_hatch(color=2)
with hatch.edit_boundary() as boundary:  # edit boundary path (context manager)
    edge_path = boundary.add_edge_path()
    # an edge path can contain line, arc, ellipse or spline elements
    edge_path.add_ellipse((0, 0), major_axis_vector=(0, 10), minor_axis_length=0.5)
    # upcoming ezdxf 0.7.7:
    # renamed major_axis_vector to major_axis
    # renamed minor_axis_length to ratio
dwg.saveas("solid_hatch_ellipse.dxf")
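For reference, newer ezdxf releases expose the boundary paths directly through a paths attribute instead of the edit_boundary() context manager shown above. The following is a sketch of the same drawing against the current documented API; the exact names are an assumption and may differ between versions:
import ezdxf

# assumption: modern ezdxf API (paths attribute instead of edit_boundary)
doc = ezdxf.new("R2000")  # HATCH still requires DXF R2000 or later
msp = doc.modelspace()

msp.add_ellipse((0, 0), major_axis=(0, 10), ratio=0.5)

hatch = msp.add_hatch(color=2)  # new hatches use solid fill by default
edge_path = hatch.paths.add_edge_path()
edge_path.add_ellipse((0, 0), major_axis=(0, 10), ratio=0.5)

doc.saveas("solid_hatch_ellipse.dxf")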
So the ellipse is filled by a solid HATCH entity whose boundary path is the ellipse edge.
For the above example, here is a snippet from the DXF file that contains the ellipse and the hatch:
AcDbEntity
8
0
100
AcDbEllipse
10
2472.192919
20
1311.37942
30
0.0
11
171.0698134145308
21
-27.61597470964863
31
0.0
210
0.0
220
0.0
230
1.0
40
0.2928953354556341
41
0.0
42
6.283185307179586
0
HATCH
5
5A
330
2
100
AcDbEntity
8
0
100
AcDbHatch
10
0.0
20
0.0
30
0.0
210
0.0
220
0.0
230
1.0
2
SOLID
70
1
71
1
91
1
92
5
93
1
72
3
10
2472.192919357234
20
1311.379420138197
11
171.0698134145308
21
-27.61597470964863
40
0.2928953354556341
50
0.0
51
360.0
73
1
97
1
330
59
75
1
76
1
47
0.794178
98
1
10
2428.34191358924
20
1317.777876434349
450
0
451
0
460
0.0
461
0.0
452
0
462
1.0
453
2
463
0.0
63
5
421
255
463
1.0
63
2
421
16776960
470
LINEAR
1001
GradientColor1ACI
1070
5
1001
GradientColor2ACI
1070
2
1001
ACAD
1010
0.0
1020
0.0
1030
0.0
There are a lot of DXF codes involved. This is the information Autodesk provides:
Hatch group codes (group code: description):

100: Subclass marker (AcDbHatch)
10: Elevation point (in OCS). DXF: X value = 0; APP: 3D point (X and Y always equal 0, Z represents the elevation)
20, 30: DXF: Y and Z values of elevation point (in OCS). Y value = 0, Z represents the elevation
210: Extrusion direction (optional; default = 0, 0, 1). DXF: X value; APP: 3D vector
220, 230: DXF: Y and Z values of extrusion direction
2: Hatch pattern name
70: Solid fill flag (solid fill = 1; pattern fill = 0); for MPolygon, the version of MPolygon
63: For MPolygon, pattern fill color as the ACI
71: Associativity flag (associative = 1; non-associative = 0); for MPolygon, solid-fill flag (has solid fill = 1; lacks solid fill = 0)
91: Number of boundary paths (loops)
varies: Boundary path data. Repeats number of times specified by code 91. See Boundary Path Data
75: Hatch style: 0 = Hatch “odd parity” area (Normal style); 1 = Hatch outermost area only (Outer style); 2 = Hatch through entire area (Ignore style)
76: Hatch pattern type: 0 = User-defined; 1 = Predefined; 2 = Custom
52: Hatch pattern angle (pattern fill only)
41: Hatch pattern scale or spacing (pattern fill only)
73: For MPolygon, boundary annotation flag (boundary is an annotated boundary = 1; boundary is not an annotated boundary = 0)
77: Hatch pattern double flag (pattern fill only): 0 = not double; 1 = double
78: Number of pattern definition lines
varies: Pattern line data. Repeats number of times specified by code 78. See Pattern Data
47: Pixel size used to determine the density to perform various intersection and ray casting operations in hatch pattern computation for associative hatches and hatches created with the Flood method of hatching
98: Number of seed points
11: For MPolygon, offset vector
99: For MPolygon, number of degenerate boundary paths (loops), where a degenerate boundary path is a border that is ignored by the hatch
10: Seed point (in OCS). DXF: X value; APP: 2D point (multiple entries)
20: DXF: Y value of seed point (in OCS); (multiple entries)
450: Indicates solid hatch or gradient; if solid hatch, the values for the remaining codes are ignored but must be present. Optional; if code 450 is in the file, then the following codes must be in the file: 451, 452, 453, 460, 461, 462, and 470. If code 450 is not in the file, then the following codes must not be in the file: 451, 452, 453, 460, 461, 462, and 470. 0 = Solid hatch; 1 = Gradient
451: Zero is reserved for future use
452: Records how colors were defined and is used only by dialog code: 0 = Two-color gradient; 1 = Single-color gradient
453: Number of colors: 0 = Solid hatch; 2 = Gradient
460: Rotation angle in radians for gradients (default = 0, 0)
461: Gradient definition; corresponds to the Centered option on the Gradient Tab of the Boundary Hatch and Fill dialog box. Each gradient has two definitions, shifted and unshifted. A Shift value describes the blend of the two definitions that should be used. A value of 0.0 means only the unshifted version should be used, and a value of 1.0 means that only the shifted version should be used.
462: Color tint value used by dialog code (default = 0, 0; range is 0.0 to 1.0). The color tint value is a gradient color and controls the degree of tint in the dialog when the Hatch group code 452 is set to 1.
463: Reserved for future use: 0 = First value; 1 = Second value
470: String (default = LINEAR)
I hope this may be of some use to you. I apologize if I misunderstood your issue.
I'm a bit of a beginner in Python, but I think I have a simple question. I am using image processing to detect lines in an image
lines = cv2.HoughLinesP(edges,1,np.pi/180,50,minLineLength,maxLineGap)
lines.shape is (151, 1, 4), meaning that I've detected 151 lines, and each has 4 parameters: x1, y1, x2, y2.
What I want to do is add another factor to lines, called slope, thus increasing lines.shape to (151, 1, 5). I know I can concatenate an empty array of zeros at the end of lines, but how do I make it so I can call it in a for loop or the like?
For example I want to be able to say
for slope in lines:
    # do stuff
Unfortunately, the HoughLinesP function returns a numpy array of type int32. I stayed up past my bedtime to figure this out, though, so I'm going to post it anyways. I just multiplied the slopes by 1000 and put them in the array like that. Hopefully, it's still useful to you.
slopes = []
for row in lines:
    slopes.append((row[0][1] - row[0][3]) / float(row[0][0] - row[0][2]) * 1000)
new_column = []
for slope in slopes:
    new_column.append([slope])
new_array = np.insert(lines, 4, new_column, axis=2)
print(lines)
print()
print(new_array)
Sample output:
[[[14 66 24 66]]
[[37 23 54 56]]
[[ 7 62 28 21]]
[[70 61 81 61]]
[[24 64 42 64]]]
[[[ 14 66 24 66 0]]
[[ 37 23 54 56 1941]]
[[ 7 62 28 21 -1952]]
[[ 70 61 81 61 0]]
[[ 24 64 42 64 0]]]
Edit: Better (and full) code with same output
import cv2
import numpy as np
img = cv2.imread('cmake_logo-main.png')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(img,50,150,apertureSize = 3)
lines = cv2.HoughLinesP(edges,1,np.pi/180,50,3,10)
def slope(xy):
    return (xy[1] - xy[3]) / float(xy[0] - xy[2]) * 1000
new_column = [[slope(row[0])] for row in lines]
new_array = np.insert(lines, 4, new_column, axis=2)
print(lines)
print()
print(new_array)
Based on your comments, here's my guess as to what you should do:
lines = np.squeeze(lines)
# remove the unneeded middle dim, a convenience, but not required
slope = <some calculation>  # expect (151,) array of floats
mask = np.ones((151,), dtype=bool)  # boolean mask
<assign False to mask for all lines you want to delete>
<alternatively, start with False and set True for the keepers>
lines = lines[mask]
slope = slope[mask]
Alternatively you could extend lines with np.hstack([lines, np.zeros((151,1))]) (or concatenate on axis 1). But if, as Jason thinks, lines is dtype int and slope must be float, that won't work. You'd have to use his scaling solution.
You could also use a structured array to combine the int and float columns into one array. But why do that if it is just as easy to keep slope as a separate variable?
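For completeness, here is a minimal sketch of the structured-array idea mentioned above (the field names are illustrative, and lines is assumed to be the squeezed (N, 4) int array):
import numpy as np

# illustrative field names; x1..y2 stay int32, slope is stored as float
dt = np.dtype([('x1', 'i4'), ('y1', 'i4'), ('x2', 'i4'), ('y2', 'i4'), ('slope', 'f8')])

combined = np.zeros(lines.shape[0], dtype=dt)
combined['x1'], combined['y1'], combined['x2'], combined['y2'] = lines.T
# guard against vertical lines (x1 == x2) to avoid division by zero
dx = np.where(lines[:, 0] == lines[:, 2], 1, lines[:, 0] - lines[:, 2])
combined['slope'] = (lines[:, 1] - lines[:, 3]) / dx

for row in combined:
    print(row['slope'])
This keeps the integer endpoints untouched while storing the slope as a float, at the cost of slightly more awkward indexing than a plain 2D array.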
In MATLAB,
I have the following data:
mass = [ 23 45 44]
velocity = [34 53 32]
time = [1 2 3]
acceleration = [32 22 12]
speed = [12 33 44]
What I'm trying to achieve is to use uicontrol to create two list boxes containing these variable names (mass, velocity, time, acceleration, speed), so that clicking one of the variables (e.g. mass) in either column outputs its numerical data, like mass = 23 45 44.
Output: numerical data stored in these variables
Here is the code:
function learnlists()
    figure;
    yourcell={'mass','velocity','time','acceleration','speed'}
    hb = uicontrol('Style', 'listbox','Position',[100 100 200 200],...
        'string',yourcell,'Callback',@measurements)
    yourcell={'mass','velocity','time','acceleration','speed'}
    hc = uicontrol('Style', 'listbox','Position',[300 100 200 200],...
        'string',yourcell,'Callback',@measurements)
    function [out] = measurements(hb,evnt)
        outvalue = get(hb,'value');
        v = get(hb,'value')
        if v == 1
            mass = [1 2 3 4 5]
        elseif v == 2
            velocity = [ 1 2 3 4 5]
        end
    end
end
Thanks,
Amanda
I suggest not using a function, to keep things simpler and to keep all the variables in your base workspace.
Here is an example for one list box:
mass = [ 23 45 44];
velocity = [34 53 32];
time = [1 2 3];
acceleration = [32 22 12];
speed = [12 33 44];
figure;
yourcell = {'mass','velocity','time','acceleration','speed'};
hb = uicontrol('Style', 'listbox','Position',[100 100 200 200],...
'string',yourcell,'Callback',...
['switch get(hb, ''Value''), ',...
'case 1, mass, ',...
'case 2, velocity, ',...
'case 3, time, ',...
'case 4, acceleration, ',...
'case 5, speed, ',...
'end']);
However, this displays the result in the command window; you could change the code to show it in a text box in your GUI.
You can also execute a script as the Callback function.
hb = uicontrol('Style', 'listbox','Position',[100 100 200 200],...
'string',yourcell,'Callback', 'myScript');
and then create a script in your directory (myScript.m):
switch get(hb, 'Value')
    case 1
        mass
    case 2
        velocity
    case 3
        time
    case 4
        acceleration
    case 5
        speed
end
Note that everything is still in your base workspace.
Hope it helps.
At Wikipedia's Mandelbrot set page there are really beautiful generated images of the Mandelbrot set.
I also just implemented my own Mandelbrot algorithm. Given that n is the number of iterations used to calculate each pixel, I color the pixels quite simply from black to green to white, like this (with C++ and Qt 5.0):
QColor mapping(Qt::white);
if (n <= MAX_ITERATIONS){
    double quotient = (double) n / (double) MAX_ITERATIONS;
    double color = _clamp(0.f, 1.f, quotient);
    if (quotient > 0.5) {
        // Close to the mandelbrot set the color changes from green to white
        mapping.setRgbF(color, 1.f, color);
    }
    else {
        // Far away it changes from black to green
        mapping.setRgbF(0.f, color, 0.f);
    }
}
return mapping;
My result looks like this:
I like it pretty much already, but which color gradient is used for the images on Wikipedia? How do I calculate that gradient for a given number of iterations n?
(This question is not about smoothing.)
The gradient is probably from Ultra Fractal. It is defined by 5 control points:
Position = 0.0 Color = ( 0, 7, 100)
Position = 0.16 Color = ( 32, 107, 203)
Position = 0.42 Color = (237, 255, 255)
Position = 0.6425 Color = (255, 170, 0)
Position = 0.8575 Color = ( 0, 2, 0)
where Position is in range [0, 1) and Color is RGB in range [0, 255].
The catch is that the colors are not linearly interpolated. The interpolation is likely cubic (or something similar). The following image shows the difference between linear and monotone cubic interpolation:
As you can see the cubic interpolation results in smoother and "prettier" gradient. I used monotone cubic interpolation to avoid "overshooting" of the [0, 255] color range that can be caused by cubic interpolation. Monotone cubic ensures that interpolated values are always in the range of input points.
I use the following code to compute the color based on iteration i:
double smoothed = Math.Log2(Math.Log2(re * re + im * im) / 2); // log_2(log_2(|p|))
int colorI = (int)(Math.Sqrt(i + 10 - smoothed) * gradient.Scale) % colors.Length;
Color color = colors[colorI];
where i is the diverged iteration number, re and im are the diverged coordinates, gradient.Scale is 256, and colors is an array with the pre-computed gradient colors shown above; its length is 2048 in this case.
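Translated into Python, the same lookup would look roughly like this (a sketch only; pick_color, colors and scale are illustrative names, with colors being the precomputed gradient table):
import math

def pick_color(i, re, im, colors, scale=256):
    # log_2(log_2(|p|)) smooths the integer escape count
    smoothed = math.log2(math.log2(re * re + im * im) / 2)
    # same indexing scheme as above: sqrt-compressed, then wrapped into the table
    color_index = int(math.sqrt(i + 10 - smoothed) * scale) % len(colors)
    return colors[color_index]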
Well, I did some reverse engineering on the colours used on Wikipedia using the Photoshop eyedropper. There are 16 colours in this gradient:
R G B
66 30 15 # brown 3
25 7 26 # dark violett
9 1 47 # darkest blue
4 4 73 # blue 5
0 7 100 # blue 4
12 44 138 # blue 3
24 82 177 # blue 2
57 125 209 # blue 1
134 181 229 # blue 0
211 236 248 # lightest blue
241 233 191 # lightest yellow
248 201 95 # light yellow
255 170 0 # dirty yellow
204 128 0 # brown 0
153 87 0 # brown 1
106 52 3 # brown 2
Simply using a modulo and a QColor array allows me to iterate through all colours in the gradient:
if (n < MAX_ITERATIONS && n > 0) {
    int i = n % 16;
    QColor mapping[16];
    mapping[0].setRgb(66, 30, 15);
    mapping[1].setRgb(25, 7, 26);
    mapping[2].setRgb(9, 1, 47);
    mapping[3].setRgb(4, 4, 73);
    mapping[4].setRgb(0, 7, 100);
    mapping[5].setRgb(12, 44, 138);
    mapping[6].setRgb(24, 82, 177);
    mapping[7].setRgb(57, 125, 209);
    mapping[8].setRgb(134, 181, 229);
    mapping[9].setRgb(211, 236, 248);
    mapping[10].setRgb(241, 233, 191);
    mapping[11].setRgb(248, 201, 95);
    mapping[12].setRgb(255, 170, 0);
    mapping[13].setRgb(204, 128, 0);
    mapping[14].setRgb(153, 87, 0);
    mapping[15].setRgb(106, 52, 3);
    return mapping[i];
}
else return Qt::black;
The result looks pretty much like what I was looking for:
:)
I believe they're the default colours in Ultra Fractal. The evaluation version comes with source for a lot of the parameters, and I think that includes that colour map (if you can't infer it from the screenshot on the front page) and possibly also the logic behind dynamically scaling that colour map appropriately for each scene.
This is an extension of NightElfik's great answer.
The Python library SciPy provides monotone cubic interpolation (as of version 1.5.2) through pchip_interpolate. I included the code I used to create my gradient below. I decided to include helper values less than 0 and larger than 1 to help the interpolation wrap from the end to the beginning (no sharp corners).
import numpy as np
from scipy.interpolate import pchip_interpolate

# set up the control points for your gradient
yR_observed = [0, 0,32,237, 255, 0, 0, 32]
yG_observed = [2, 7, 107, 255, 170, 2, 7, 107]
yB_observed = [0, 100, 203, 255, 0, 0, 100, 203]
x_observed = [-.1425, 0, .16, .42, .6425, .8575, 1, 1.16]
#Create the arrays with the interpolated values
x = np.linspace(min(x_observed), max(x_observed), num=1000)
yR = pchip_interpolate(x_observed, yR_observed, x)
yG = pchip_interpolate(x_observed, yG_observed, x)
yB = pchip_interpolate(x_observed, yB_observed, x)
#Convert them back to python lists
x = list(x)
yR = list(yR)
yG = list(yG)
yB = list(yB)
# Find the indexes where x crosses 0 and crosses 1, for slicing
start = 0
end = 0
for i in x:
    if i > 0:
        start = x.index(i)
        break
for i in x:
    if i > 1:
        end = x.index(i)
        break
# Slice away the helper data at the beginning and end, leaving just 0 to 1
x = x[start:end]
yR = yR[start:end]
yG = yG[start:end]
yB = yB[start:end]
# Plot the values if you want (requires: import matplotlib.pyplot as plt)
#plt.plot(x, yR, color = "red")
#plt.plot(x, yG, color = "green")
#plt.plot(x, yB, color = "blue")
#plt.show()
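If you need the gradient as an integer RGB palette (e.g. for the modulo lookup shown earlier), a short follow-up along these lines would work (a sketch; palette is an illustrative name):
# Round the interpolated channels and clamp them to the valid 0-255 range
palette = [
    (min(max(int(round(r)), 0), 255),
     min(max(int(round(g)), 0), 255),
     min(max(int(round(b)), 0), 255))
    for r, g, b in zip(yR, yG, yB)
]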