I'm trying to do edge detection with SimpleCV on a Raspberry Pi by first finding all the lines in an image and then filtering the line set based on location, intersection angle, and color. I have the filtering figured out, but am having difficulty displaying the image with only the filtered lines drawn in.
Currently I can draw the full line set with
handle_lin = my_lines_full.draw()
handle_img = some_image.show()
and the filtered line set independently with
handle_lin = my_lines_filtered.draw()
handle_img = some_image.show()
but since this second call also displays the full line set, no difference is visible when I run both in the same script. What's the best way to erase the layer that stores the line drawings, or to selectively remove elements of the drawing?
Solved(-ish):
It seems the some_lines.draw() command toggles the line set, so by repeating the .draw() command before updating the line set I can clear the layer that is displayed on the image.
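For reference, here is a minimal sketch of the cleaner route, assuming SimpleCV's Image.clearLayers() is available to wipe the drawing layers explicitly instead of relying on the toggle (the length filter is an arbitrary example, not my real filtering):

from SimpleCV import Image

some_image = Image("lines.png")
my_lines_full = some_image.findLines()

my_lines_full.draw()      # draw the full set on the image's drawing layer
some_image.show()

some_image.clearLayers()  # assumption: wipes all drawing layers on the image
my_lines_filtered = [l for l in my_lines_full if l.length() > 50]
for line in my_lines_filtered:
    line.draw()
some_image.show()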
I am using Yolact (https://github.com/dbolya/yolact), an instance segmentation algorithm which outputs the test image with a mask on each detected object. Since the input images are given with the coordinates of polygons around the target classes in annotations.json, I want to get an output in the same form, but I can't figure out how to extract the coordinates of those contours/polygons.
As far as I understood from the script https://github.com/dbolya/yolact/blob/master/eval.py, the output is a list of tensors for the detected objects; it contains the classes, scores, boxes, and masks for the evaluated image. The eval.py script returns the recognized image with all this information. The predictions are saved in 'preds' in the evalimage function (line 595), and post-processing of the prediction result happens in prep_display (line 135).
Now how do I extract those polygon coordinates and save them to a .json file (or any other format)?
I also tried looking at these, but sadly couldn't figure it out:
https://github.com/dbolya/yolact/issues/286
and
https://github.com/dbolya/yolact/issues/256
You need to create a complete post-processing pipeline that is specific to your task. Here's a small piece of pseudocode that could be added to prep_display() in eval.py:
with timer.env('Copy'):
    if cfg.eval_mask_branch:
        # Add the line below to get all the predicted object masks as a list
        all_objects_mask = t[3][:args.top_k]
        # Convert each object mask to binary and then use OpenCV's
        # findContours() method to extract the contour points for each object
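To make that pseudocode concrete, here is a hedged sketch of the extraction step, assuming masks is an N x H x W tensor of per-object masks (e.g. the all_objects_mask slice above moved off the GPU); the 0.5 threshold, variable names, and output file are my own choices:

import json
import cv2
import numpy as np

# masks: N x H x W tensor of per-object masks (assumption, see above)
binary_masks = (masks > 0.5).byte().cpu().numpy().astype(np.uint8)

all_polygons = []
for mask in binary_masks:
    # OpenCV 4 signature; OpenCV 3 returns (image, contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Flatten each (K, 1, 2) contour to [x1, y1, x2, y2, ...], the layout
    # used for segmentation polygons in COCO-style annotations.json files
    all_polygons.append([c.reshape(-1).tolist() for c in contours])

with open('polygons.json', 'w') as f:
    json.dump(all_polygons, f)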
I have a CGAL surface_mesh of triangles with some self-intersecting triangles which I'm trying to remove to create a continuous 2-manifold shell, ultimately for printing.
I've attempted to use remove_self_intersection() and autorefine_and_remove_self_intersections() from this answer. The first only removes a few self-intersections while the second completely removes my mesh.
So I'm trying my own approach: I'm finding the self-intersections and then attempting to delete them. I've tried using the low-level remove_face, but the borders are not detectable afterwards, so I'm unable to fill the resulting holes. This answer refers to using the higher-level Euler::remove_face, but this method, as well as make_hole, seems to discard my mesh entirely.
Here is an extract (I'm using break to see if I can get at least one triangle removed, and I'm just trying with the first of the pair):
// Collect all pairs of intersecting faces
vector<pair<face_descriptor, face_descriptor> > intersected_tris;
PMP::self_intersections(mesh, back_inserter(intersected_tris));
for (pair<face_descriptor, face_descriptor> &p : intersected_tris) {
    // Try removing just the first face of the first intersecting pair
    CGAL::Euler::remove_face(mesh.halfedge(get<0>(p)), mesh);
    break;
}
My approach to removing self-intersecting triangles is to aggressively delete the intersecting faces along with nearby faces, then fill the resulting holes. Thanks to @sloriot's comment I realised that the Euler::remove_face function was failing due to duplicate faces in the selection returned by the self_intersections and expand_face_selection functions.
A quick way to remove duplicate faces from the vector result of those two functions is:
std::set<face_descriptor> s(selected_faces.begin(), selected_faces.end());
selected_faces.assign(s.begin(), s.end());
This converts the vector of faces into a set (sets contain no duplicates) and then copies the set back into the vector.
Once the duplicates were removed, the Euler::remove_face function worked correctly, including updating the borders, so triangulate_hole could be used on the result, producing a final surface with no self-intersections.
Just getting into matplotlib and running into an odd problem: I'm trying to plot 10 items and use their names on the x-axis. I followed this suggestion and it worked great, except that my label names are long and they were all scrunched up. So I found that you can rotate labels, and got the following:
plt.plot(range(len(df.columns)), df[df.columns[0]], 'ro')
plt.xticks(range(10), df.columns, rotation=45)
The labels all seem to be off by a tick ("Arthrobacter" should be aligned with 0). So I thought my indexing was wrong and tried a bunch of other things to fix it, but it turns out it's just odd (at least to me) behavior of the rotation. If I use rotation='vertical', I get what I want:
I see now that the centers of the labels are clearly aligned with the ticks, but I expected them to terminate on the ticks, like this (done in Photoshop):
Is there a way to get this done automatically?
The labels are not "off"; labels are actually placed via their center. In your second image, the corresponding tick is above the center of the label, not above its endpoint. You can change that by adding ha='right', which modifies the horizontal alignment of the label.
plt.plot(range(len(df.columns)), df[df.columns[0]], 'ro')
plt.xticks(range(10), df.columns, rotation=45, ha='right')
See the comparison below:
1)
import numpy as np
import matplotlib.pyplot as plt

plt.plot(np.arange(4), np.arange(4))
plt.xticks(np.arange(4), ['veryverylongname']*4, rotation=45)
plt.tight_layout()
2)
plt.plot(np.arange(4), np.arange(4))
plt.xticks(np.arange(4), ['veryverylongname']*4, rotation=45, ha='right')
plt.tight_layout()
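If you would rather have the label text end exactly at the tick instead of merely being right-aligned around its center line, matplotlib's tick labels (Text objects) also accept rotation_mode='anchor', which rotates the label around its alignment point; a small variation on snippet 2):

plt.plot(np.arange(4), np.arange(4))
plt.xticks(np.arange(4), ['veryverylongname']*4,
           rotation=45, ha='right', rotation_mode='anchor')
plt.tight_layout()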
I want to merge two images. How can I remove the area that is the same between the two images?
Can you suggest an algorithm to solve this problem? Thanks.
The two images are screenshots. They have the same width, and image 1 is always above image 2.
When the two images have the same width and there is no X-offset at the left side, this shouldn't be too difficult.
You should create two vectors of integers and store the CRC of each pixel row in the corresponding vector element. After doing this for both pictures, find the CRC of the first row of the lower image in the upper image's vector; the position where it occurs is the overlap offset in the upper picture. Then check that all following CRCs from both pictures are identical. If they are not, look up the next occurrence of the initial CRC in the upper image and try again.
After checking that the CRCs of both pictures are identical once the offset is applied, you can use the bitblit function of your graphics library to build the composite picture.
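A minimal sketch of this row-CRC approach in Python, assuming Pillow for the image I/O and zlib.crc32 as the per-row checksum (file names are placeholders):

import zlib
from PIL import Image

top = Image.open('top.png').convert('RGB')
bottom = Image.open('bottom.png').convert('RGB')

def row_crcs(img):
    # CRC of each pixel row, one vector element per row
    w, h = img.size
    raw = img.tobytes()
    stride = w * 3  # 3 bytes per RGB pixel
    return [zlib.crc32(raw[y * stride:(y + 1) * stride]) for y in range(h)]

top_crcs, bottom_crcs = row_crcs(top), row_crcs(bottom)

# Find the first row of the lower image inside the upper image, then
# verify that every following row matches until the upper image ends
offset = None
for y in range(len(top_crcs)):
    n = len(top_crcs) - y
    if top_crcs[y:] == bottom_crcs[:n]:
        offset = y
        break

if offset is not None:
    # Composite: all of the upper image, then the rest of the lower one
    merged = Image.new('RGB', (top.width, offset + bottom.height))
    merged.paste(top, (0, 0))
    merged.paste(bottom, (0, offset))
    merged.save('merged.png')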
I haven't come across something similar before, but I think the following might work:
Convert both to grey-scale.
Enhance the contrast: the grey box might become white, for example, and the text would become blacker. (This is just to increase confidence in the next step.)
Apply some threshold, converting the pictures to black and white.
Afterwards, you could find the similar areas (and thus the offset of the overlap) with a good degree of confidence. To find the similar parts, you could use harper's method (which is good, but I don't know how reliable it would be without the said filtering), or you could apply some DSP operation(s) such as convolution.
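As a rough illustration of the threshold-then-correlate idea, an OpenCV sketch (the strip height, threshold value, and file names are arbitrary assumptions):

import cv2

top = cv2.imread('top.png', cv2.IMREAD_GRAYSCALE)
bottom = cv2.imread('bottom.png', cv2.IMREAD_GRAYSCALE)

# Threshold to black and white to increase confidence, as suggested above
_, top_bw = cv2.threshold(top, 128, 255, cv2.THRESH_BINARY)
_, bottom_bw = cv2.threshold(bottom, 128, 255, cv2.THRESH_BINARY)

# Correlate a strip from the top of the lower screenshot against the upper
# screenshot; the best-scoring row is where the overlap starts
strip = bottom_bw[:20]
result = cv2.matchTemplate(top_bw, strip, cv2.TM_CCOEFF_NORMED)
_, score, _, max_loc = cv2.minMaxLoc(result)
print('overlap starts at row', max_loc[1], 'with score', score)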
Hope that helps.
If your images have the same width and image 1 is always on top, I don't see how this could be hard:
Just store the bytes of the last line of image 1.
Then, going from the first line of image 2 to its last, run this test:
If the current line of image 2 is not equal to the last line of image 1 -> continue
else -> break the loop
Then you have to define a new byte container for your new image:
Store all the lines of image 1, plus all the lines of image 2 starting at (the found line + 1).
What would make you sweat here is finding the libraries to manipulate all these data structures, but after a bit of linking and documentation digging you should be able to implement this easily.
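A quick sketch of this line-by-line comparison using numpy and Pillow (file names are placeholders, and both screenshots are assumed to load as RGB arrays of the same width):

import numpy as np
from PIL import Image

img1 = np.asarray(Image.open('top.png').convert('RGB'))
img2 = np.asarray(Image.open('bottom.png').convert('RGB'))

last_row = img1[-1]  # bytes of the last line of image 1

# Scan image 2 from its first line for a line equal to last_row
match = next((y for y in range(img2.shape[0])
              if np.array_equal(img2[y], last_row)), None)

if match is not None:
    # All lines of image 1 + the lines of image 2 after the found line
    merged = np.vstack([img1, img2[match + 1:]])
    Image.fromarray(merged).save('merged.png')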
Is there a way to get the number of lines currently on a matplotlib plot? I find myself setting colors in a colormap using a counter and a multiplier to step through the color values, which seems rather un-pythonic.
All the Line2D objects in an axes are stored in a list:
ax.lines
If you only use simple line plots, the length of the above list is enough.
If you use plt.errorbar, the situation is a bit more complicated, as it creates multiple Line2D objects (central lines, vertical and horizontal error bars, and their caps).
If you want to automate the colours assigned to lines, you can create a cycle like this:
import itertools as it
colors = it.cycle(['red', 'green', 'blue'])  # any list of colours will do
and then fetch the next colour with next(colors) (colors.next() in Python 2); the cycle restarts from the first colour after it reaches the last.
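A minimal sketch putting both pieces together (the colour list and the plotted curves are arbitrary examples):

import itertools as it
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
colors = it.cycle(['crimson', 'seagreen', 'steelblue'])

x = np.linspace(0, 1, 50)
for k in range(5):
    ax.plot(x, x ** k, color=next(colors))  # restarts after the third colour

print(len(ax.lines))  # -> 5, the number of Line2D objects on the axes
plt.show()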