Updating elements for Sudoku puzzle - list

I'm a bit lost at this point. If anyone has some spare time to kill, please take a look at this and offer suggestions. I've been trying for a while now to figure this out.
I'm having trouble updating the number in the tiles after mouse clicks. I'm posting the entire code below since it's mostly interrelated. I tried to narrow the error down, but I'm not 100 percent sure where it is since I just started learning tkinter. I can get the first square to update by assigning a new create_text item of the canvas into the 2-D list; the problem with this is that it leaves all of the previous numbers on screen, and I'd have to do this for the entire 9x9 grid (this is the last line before the else statement in handle_clicks).
There has to be an easier way of updating numbers?
def handle_clicks(self, event):
    DX, DY = 100, 100
    xclick = self.canvas.canvasx(event.x)
    yclick = self.canvas.canvasy(event.y)
    if (xclick > BORDER_WIDTH and xclick < BORDER_WIDTH + DX and
            yclick > BORDER_WIDTH and yclick < BORDER_WIDTH + DX):
        if self.final_list[0][0] < str(9):
            val = self.final_list[0][0]
            val = int(val)
            val += 1
            self.final_list[0][0] = val
            new_val = self.final_list[0][0]
            new_val = str(new_val)
            self.final_list[0][0] = new_val
            self.id_list[0][0] = self.canvas.create_text(xclick,
                                                         yclick,
                                                         fill='yellow',
                                                         text='%s' % new_val)
            print(self.id_list)
        else:
            self.seed_value[0][0] -= 1
Full code: http://pastebin.com/2FwaMrdd

There has to be an easier way of updating numbers?
There is: you can call Canvas.itemconfigure to change the text of a text item on the canvas. In your case, this would be:
self.canvas.itemconfigure(self.id_list[0][1], text=new_val)
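For reference, here is a minimal sketch of how the whole grid could be handled that way. It keeps the names from the question (final_list, id_list, BORDER_WIDTH, DX, DY), and it assumes id_list[row][col] holds one text-item id per cell, which is an assumption about how the rest of your code fills that list:

# Sketch only: assumes id_list[row][col] is the canvas text item for that cell
# and final_list[row][col] is its current digit stored as a string.
def handle_clicks(self, event):
    DX, DY = 100, 100
    col = int((self.canvas.canvasx(event.x) - BORDER_WIDTH) // DX)
    row = int((self.canvas.canvasy(event.y) - BORDER_WIDTH) // DY)
    if 0 <= row < 9 and 0 <= col < 9:
        val = int(self.final_list[row][col])
        val = val + 1 if val < 9 else 1          # wrap 9 back around to 1
        self.final_list[row][col] = str(val)
        # Update the existing text item instead of creating a new one.
        self.canvas.itemconfigure(self.id_list[row][col], text=str(val))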

Related

Turn data table into a matrix

I'm really desperate since I can't create a matrix using the given code. I am not allowed to use numpy or any other imported libraries.
Here's my code, which I've translated from Spanish, so I'm sorry if I miss a word or two:
start = float(input("First value of time: "))
incremento = float(input("Increase: "))
final = float(input("Final time: "))
max_height = 0.0
max_time = 0.0
print("Time\t Height(m)\t Speed(m/s)\t")
time = start
while (time <= final and time <= 48):
    height = -0.12*time**4 + 12*time**3 - 380*time**2 + 4100*time + 220
    speed = -0.48*time**3 + 36*time**2 - 760*time + 4100
    speed /= 3600
    print("%.2f\t %.2f\t %.2f\t" % (time, height, speed))
    if height > max_height:
        max_height = height
        max_time = time
    time += incremento
print("Maximum height is %.2f m in time %.2f." % (max_height, max_time))
I'm supposed to create a matrix from the information printed as a table.
Python doesn't have a built-in matrix type, so you'd want to store the information in a list of lists.
Before the while loop add table = [].
Inside the while loop, add table.append([time, height, speed]).
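Putting that together, a minimal sketch of the loop with the table added (variable names taken from the question; this assumes the loop variable is meant to start at start):

table = []                                   # the "matrix": a list of rows
time = start
while time <= final and time <= 48:
    height = -0.12*time**4 + 12*time**3 - 380*time**2 + 4100*time + 220
    speed = (-0.48*time**3 + 36*time**2 - 760*time + 4100) / 3600
    table.append([time, height, speed])      # one row per time step
    time += incremento
# table[i][0] is the time, table[i][1] the height, table[i][2] the speed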

How to calculate fiber length

I need help calculating fiber length. I found all the coordinates of the center line of each fiber by using the regional maxima of the Euclidean distance transform. Here is the image I got after applying the regional maxima of the Euclidean distance. Now I want to draw a line along each fiber using these points; how can I do that, so that I can extract each fiber's length automatically? I tried spline curve fitting, but the problem is that I was not able to set the starting and ending point of each fiber. How can I calculate each fiber's length?
close all;
clear all;
clc
ima=imread('ecm61.png');
ima=bwareaopen(ima,50);
[rowsInImage,columnImage]=size(ima);
skel= bwmorph(ima,'skel',Inf);
figure
imshow(skel)
B = bwmorph(skel, 'branchpoints');
E = bwmorph(skel, 'endpoints');
[x,y] = find(E);
%plot(x,y,'+')
B_loc = find(B);
Dmask = false(size(skel));
for k = 1:numel(y)
    D = bwdistgeodesic(skel, y(k), x(k));
    distanceToBranchPt = min(D(B_loc));
    Dmask(D < distanceToBranchPt) = true;
end
skelD = skel - Dmask;
figure
imshow(skelD);
hold all;
[x,y] = find(B); plot(y,x,'ro')
numberOfEndpoints=length(y);
% Label the image. Gives each separate segment a unique ID label number.
[labeledImage, numberOfSegments] = bwlabel(skelD);
fprintf('There are %d endpoints on %d segments.\n', numberOfEndpoints, numberOfSegments);
% Get the label numbers (segment numbers) of every endpoint.
for k = 1 : numberOfEndpoints
    thisRow = x(k);
    thisColumn = y(k);
    %line([endPointRows(k),endPointColumns(k)],[endPointRows(k+1),endPointColumns(k+1)])
    % Get the label number of this segment
    theLabels(k) = labeledImage(thisRow, thisColumn);
    fprintf('Endpoint #%d at (%d, %d) is in segment #%d.\n', k, thisRow, thisColumn, theLabels(k));
end
% For each endpoint, find the closest other endpoint
% that is not in the same segment
for k = 1 : numberOfEndpoints
    thisRow = x(k);
    thisColumn = y(k);
    % Get the label number of this segment
    thisLabel = theLabels(k);
    otherEndpointIndexes = setdiff(1:numberOfEndpoints, k);
    %if mustBeDifferent
    %   If they want to consider joining only end points that reside on different segments
    %   then we need to remove the end points on the same segment from the "other" list.
    %   Get the label numbers of the other end points.
    %   otherLabels = theLabels(otherEndpointIndexes);
    %   onSameSegment = (otherLabels == thisLabel); % List of what segments are the same as this segment
    %   otherEndpointIndexes(onSameSegment) = []; % Remove if on the same segment
    %end
    % Now get a list of only those end points that are on a different segment.
    otherCols = y(otherEndpointIndexes);
    otherRows = x(otherEndpointIndexes);
    % Compute distances
    distances = sqrt((thisColumn - otherCols).^2 + (thisRow - otherRows).^2);
    % Find the min
    [minDistance, indexOfMin] = min(distances);
    nearestX = otherCols(indexOfMin);
    nearestY = otherRows(indexOfMin);
    %if minDistance < longestGapToClose;
    if minDistance < rowsInImage
        % Draw line from this endpoint to the other endpoint.
        line([thisColumn, nearestX], [thisRow, nearestY], 'Color', 'g', 'LineWidth', 2);
        fprintf('Drawing line #%d, %.1f pixels long, from (%d, %d) on segment #%d to (%d, %d) on segment #%d.\n', ...
            k, minDistance, thisColumn, thisRow, theLabels(k), nearestX, nearestY, theLabels(indexOfMin));
    end
end
title('Endpoints Linked by Green Lines', 'FontSize', 12, 'Interpreter', 'None');
(Image: result after using edge linking.)
I would do this:
Skeletonize.
Prune.
Find the different paths at each intersection. This will give you the separate segments, and you can reconnect them using their orientation.
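A rough Python sketch of that idea, in case it helps (this assumes scikit-image and SciPy are available; it is not the MATLAB pipeline from the question, and it approximates each segment's length by its pixel count, which underestimates diagonal runs a little):

import numpy as np
from scipy import ndimage
from skimage.io import imread
from skimage.morphology import skeletonize

ima = imread('ecm61.png', as_gray=True) > 0      # binary fiber mask
skel = skeletonize(ima)

# Count 8-connected skeleton neighbours of every skeleton pixel.
neighbours = ndimage.convolve(skel.astype(int), np.ones((3, 3)), mode='constant') - skel
branch_points = skel & (neighbours >= 3)          # pixels where fibers cross

# Remove branch points so each fiber breaks into simple segments.
segments = skel & ~branch_points
labels, n = ndimage.label(segments, structure=np.ones((3, 3)))

# Approximate each segment's length by its pixel count
# (a chain-code length would be more accurate for diagonal runs).
lengths = ndimage.sum(segments, labels, index=range(1, n + 1))
for i, seg_len in enumerate(lengths, start=1):
    print(f'segment {i}: ~{seg_len:.0f} px')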

How to filter lines of a given width in an image?

I need to filter lines of a given width in an image.
I am writing a program which will detect lines in a road image. I found something like the code below, but I can't understand the logic of it. My function has to do this: I will pass in an image and a line width in pixels (e.g. 30 pixels wide), and the function will keep only the lines of that width in the image.
I found this code:
void filterWidth(Mat image, int tau) // tau = width of line I want to filter
{
    int aux = 0;
    for (int j = 0; j < quad.rows; ++j)
    {
        unsigned char *ptRowSrc = quad.ptr<uchar>(j);
        unsigned char *ptRowDst = quadDst.ptr<uchar>(j);
        for (int i = tau; i < quad.cols - tau; ++i)
        {
            if (ptRowSrc[i] != 0)
            {
                aux = 2 * ptRowSrc[i];
                aux += -ptRowSrc[i - tau];
                aux += -ptRowSrc[i + tau];
                aux += -abs((int)(ptRowSrc[i - tau] - ptRowSrc[i + tau]));
                aux = (aux < 0) ? (0) : (aux);
                aux = (aux > 255) ? (255) : (aux);
                ptRowDst[i] = (unsigned char)aux;
            }
        }
    }
}
What is the mathematical explanation of that code? And how does that work?
Read up about convolution filters. This code is a particular case of a 1-dimensional convolution filter (it only convolves with other pixels on the currently processed line).
The value of aux starts at 2 * the current pixel value; then the pixels at distance tau on either side of it are subtracted from that value. Next, the absolute difference of those two pixels is also subtracted. Finally, the result is clamped to the range 0...255 before being stored in the output image.
If you have an image:
0011100
This convolution (with tau = 2) will cause the centre 1 to get the value:
2 * 1
- 0
- 0
- abs(0 - 0)
= 2
The first '1' will become:
2 * 1
- 0
- 1
- abs(0 - 1)
= 0
And so will the third '1' (it's a mirror image).
And of course the 0 values will always stay zero or become negative, which will be capped back to 0.
This is a rather weird filter. It takes the pixel values three at a time on the same line, with a spacing of tau. Let these values be Vl, V and Vr.
The filter computes -Vl + 2V - Vr, which can be seen as a second derivative, and subtracts |Vl - Vr|, which can be seen as a first derivative (also called the gradient). The second derivative gives a maximum response for a peak configuration (Vl < V > Vr); the first derivative gives a minimum response for a symmetric configuration (Vl = Vr), so symmetric peaks are penalized the least.
So the overall filter gives a maximum response for a symmetric maximum (like a light line on a dark background, roughly vertical, with a width of less than 2*tau).
By rearranging the terms, you can see that the filter in fact yields twice the smaller of the left and right gradients, V - Vl and V - Vr (clamped to the 0...255 range).
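For what it's worth, here is a NumPy sketch of the same row-wise formula (2V - Vl - Vr - |Vl - Vr|, clamped to 0...255); it is a re-implementation for illustration, not the original OpenCV code:

import numpy as np

def filter_width(img, tau):
    # Responds to bright, roughly vertical ridges narrower than about 2*tau pixels.
    src = img.astype(np.int32)
    out = np.zeros_like(src)
    left = src[:, :-2 * tau]       # V(i - tau)
    mid = src[:, tau:-tau]         # V(i)
    right = src[:, 2 * tau:]       # V(i + tau)
    resp = 2 * mid - left - right - np.abs(left - right)
    resp = np.clip(resp, 0, 255)
    # Like the C++ loop, only keep a response where the source pixel is non-zero.
    out[:, tau:-tau] = np.where(mid != 0, resp, 0)
    return out.astype(np.uint8)

With the question's example, something like filter_width(gray_road_image, 15) should keep lines up to roughly 30 pixels wide.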

Return progress status when drawing a large NetworkX graph

I have a large graph that I'm drawing, and it is taking a long time to process.
Is it possible to return a status, current_node, or percentage of the current progress of the drawing?
I'm not looking to incrementally draw the network, as all I'm doing is saving it to a high-dpi image.
Here's an example of the code I'm using:
import networkx as nx
import matplotlib.pyplot as plt

path = nx.shortest_path(G, source=u'1234', target=u'98765')
path_edges = zip(path, path[1:])
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G,pos,nodelist=path,node_color='r')
nx.draw_networkx_edges(G,pos,edgelist=path_edges,edge_color='r',width=10)
plt.axis('equal')
plt.savefig('prototype_map.png', dpi=1000)
plt.show()
I believe the only way to do it is to modify the source code of the draw functions to print something like "10% complete, 20% complete, ...". But when I checked the source code of draw_networkx_nodes and draw_networkx, I realized that it is not a straightforward task, because the draw function stores the positions (nodes and edges) in a numpy array and sends it to matplotlib's ax.scatter function (see the source code), which is a bit hard to manipulate without messing something up. The only thing I can think of is to change:
xy = numpy.asarray([pos[v] for v in nodelist]) # In draw_networkx_nodes function
To
xy = []
count = 0
for v in nodelist:
    xy.append(pos[v])
    count += 1
    if count == len(nodelist) // 2:
        print('50% of nodes completed')
print('100% of nodes completed')
xy = numpy.asarray(xy)
Similarly, when draw_networkx_edges is called, you could indicate progress in edge drawing. I am not sure how accurate this will be, because I do not know how much time is spent inside the ax.scatter function. I also looked in the source code of the scatter function, but I could not pinpoint a loop or anything where I could print an indication that some progress has been made.
Some layout functions accept a pos argument so they can continue from a previous layout. We can use this fact to split the computation into chunks and draw a progress bar using tqdm:
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
from tqdm import tqdm

def plot_graph(g, iterations=50, pos=None, k_numerator=None, figsize=(10, 10)):
    if k_numerator is None:
        k = None
    else:
        k = k_numerator / np.sqrt(g.number_of_nodes())
    with tqdm(total=iterations) as pbar:
        step = 5
        iterations_done = 0
        while iterations_done < iterations:
            pos = nx.layout.fruchterman_reingold_layout(
                g, iterations=step, pos=pos, k=k
            )
            iterations_done += step
            pbar.update(step)
    fig = plt.figure(figsize=figsize, dpi=120)
    nx.draw_networkx(
        g,
        pos,
    )
    return fig, pos
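Usage with the graph from the question would then look roughly like this (the k_numerator value here is just an arbitrary example), with the highlighted path drawn on top of the returned layout:

fig, pos = plot_graph(G, iterations=100, k_numerator=2)
path = nx.shortest_path(G, source=u'1234', target=u'98765')
nx.draw_networkx_nodes(G, pos, nodelist=path, node_color='r')
nx.draw_networkx_edges(G, pos, edgelist=list(zip(path, path[1:])), edge_color='r', width=10)
plt.savefig('prototype_map.png', dpi=1000)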

Super mysterious logic error in a planar-fitting image processing algorithm

So I have this image processing program where I am using a linear regression algorithm to find the plane that best fits all of the points (x, y, z), with z being the pixel intensity (0-255).
Simply speaking, I have a picture of ? x ? dimensions. I run this algorithm and get the A, B, C values (3 float values).
Then I go over every pixel in the program and subtract mod_val from the pixel value, where
mod_val = (-A * x - B * y) / C
A, B, C are constants, while x, y is the pixel location in the x,y plane.
When the dimension of the picture is divisible by 100 it's perfect, but when it's not, the picture fractures. The picture itself is the same as the original, but there is a diagonal line with a color contrast that goes across the picture. The program is supposed to make the pixel color uniform from the center.
I tried running a picture with mod_val = 0 for the not-divisible-by-100 dimensions and it copies the picture perfectly, so I doubt there is a problem with storing and writing the read data in terms of alignment. (FYI, this picture is a greyscale 8-bit .bmp.)
I have tried changing the A, B, C values, but the diagonal remains the same; only the color of the image fragments within the diagonals changes.
When I run a 1400 x 1100 picture it works perfectly with the mod_val equation written above, which is the most baffling part.
I spent a lot of time looking for rounding errors. The values are virtually all floats. The dimension I used for the breaking picture is 1490 x 1170.
Here is a fragment of the code where I think the error is occurring:
int img_row = row_length;
int img_col = col_length;
int i = 0;
float *pfAmultX = new float[img_row];
for (int x = 0; x < img_row; x++)
{
    pfAmultX[x] = (A * x) / C;
}
for (int y = 0; y < img_col; y++)
{
    float BmultY = B * y / C;
    for (int x = 0; x < img_row; x++, i++)
    {
        modify_val = pfAmultX[x] + BmultY;
        int temp = (int) data.data[i];
        data.data[i] += (unsigned char) modify_val;
        if (temp >= 250) {
            data.data[i] = 255;
        }
        else if (temp < 0) {
            data.data[i] = 0;
        }
    }
}
delete[] pfAmultX;
The img_row and img_col values are correct according to the VS debugger.
Any help would be greatly appreciated. I've been trying to find this bug for many hours now, and my boss is telling me that I can't go home until I find it...
(Image: before the algorithm, 1400 x 1100, works)
(Image: after)
(Image: before, 1490 x 1170, demonstrates the problem)
(Image: after)
UPDATE:
Well, after extensive testing I have boiled the problem down to something with the x coordinate.
This is because, for the 1400 x 1100 image, large A or B values (or both; the C value is always ~0.999) do not create diagonals.
However, for the other image, large B values do not create diagonals, but a fairly small to average A value does.
What's more, when I test a picture where x is divisible by 100 but y is only divisible by 10, the result is correct.
Well, in the end I found the solution. It was a problem due to the padding in the bitmap. When the x dimension was not divisible by 4, padding was added, which threw off all of the x coordinates. This also meant that the row value I read from the BMP header was the pixel width, but not the real row size in bytes. I had to change the row size to 4 * ((row_value_from_bmp_header + 3) / 4), i.e. round the row width up to the next multiple of 4.
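For anyone who hits the same thing: BMP pixel rows are padded to a multiple of 4 bytes, so the byte stride of a row is not the pixel width unless that width happens to be divisible by 4. A small sketch of the arithmetic (Python, assuming an 8-bit greyscale BMP like the one in the question):

def bmp_row_stride(width_px, bytes_per_pixel=1):
    # Each stored row is padded up to the next multiple of 4 bytes.
    return 4 * ((width_px * bytes_per_pixel + 3) // 4)

def pixel_offset(x, y, width_px):
    # Byte offset of pixel (x, y) within the pixel data of an 8-bit BMP.
    return y * bmp_row_stride(width_px) + x

print(bmp_row_stride(1400))   # 1400 -> already a multiple of 4, no padding, works
print(bmp_row_stride(1490))   # 1492 -> 2 padding bytes per row threw off the x coordinates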