I have a semicircle-shaped image that I am trying to turn into a 120 degree fan shape. The image needs to be squeezed so that its two straight edges come closer together.
After a lot of searching I've tried both affine and perspective transforms. Neither seemed to give me the results I was looking for.
I am using OpenCV2 and C++
How can I achieve this effect?
Edit:
Currently I map a rectangular image onto a semicircle. So, it would also be acceptable to map directly onto a 120 degree fan shape. Would that be a better approach, and if so, how might I accomplish it?
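To make the goal concrete, this is roughly the direct mapping I'm imagining, sketched with cv::remap (the file name, output size, and axis conventions here are just placeholders, not my actual code):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Sketch: map a rectangular strip onto a 120-degree fan using cv::remap.
// Assumptions: the rectangle's x axis becomes the fan angle, its y axis
// becomes the radius, and the fan apex sits at the bottom-center of the
// output image.
int main()
{
    cv::Mat src = cv::imread("strip.png");          // hypothetical input
    const int outW = 600, outH = 320;
    const double halfFan = 120.0 * CV_PI / 360.0;   // 60 degrees in radians
    const double radius = outH - 10.0;

    cv::Mat mapX(outH, outW, CV_32FC1), mapY(outH, outW, CV_32FC1);
    const cv::Point2d apex(outW / 2.0, outH - 1.0);

    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
        {
            double dx = x - apex.x, dy = apex.y - y;        // up is +y
            double r = std::sqrt(dx * dx + dy * dy);
            double a = std::atan2(dx, dy);                  // 0 = straight up
            if (r <= radius && std::abs(a) <= halfFan)
            {
                // Normalized fan coordinates -> source rectangle coordinates.
                mapX.at<float>(y, x) = float((a + halfFan) / (2 * halfFan) * (src.cols - 1));
                mapY.at<float>(y, x) = float((1.0 - r / radius) * (src.rows - 1));
            }
            else
            {
                mapX.at<float>(y, x) = -1.f;                // outside the fan
                mapY.at<float>(y, x) = -1.f;
            }
        }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR,
              cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::imwrite("fan.png", dst);
    return 0;
}
```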
I'm still trying to control mesh density (grading) in CGAL. Specifically, I'm tet-meshing a polygon surface (or multiple surface manifolds) that I simply load as OFF files. I can also load lists of selected faces or face nodes.
But I can't seem to get to first base on this with the polygon tet-mesher. All I want to do is assign and enforce a mesh density/size at selected faces in the OFF file.
I CAN get some kinds of mesh density working by inserting 1D features with volumetric data meshing, but for CAD and 3D printing purposes it has to be computed from an STL-like triangular surface manifold, so volume-based meshing is not doable.
Is what I'm trying to do even possible in CGAL? It feels to me like it must be, and I'm just missing something obvious.
I really hope someone can help here. FYI, I'm mostly working with the Mesh_3 examples, using v4.14.
Thanks very much.
Look at Mesh_facet_criteria, and in particular the constructor whose SizingField parameter is where you can control the size. For locating a point with respect to a face, you can use the AABB-tree function closest_point_and_primitive().
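As a rough illustration (an untested sketch; the selected-face set, the size values, and the assumption that the mesher's query points can be matched to Surface_mesh faces this way are all mine), a sizing field built on that AABB-tree call could look like:

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Surface_mesh.h>
#include <CGAL/AABB_tree.h>
#include <CGAL/AABB_traits.h>
#include <CGAL/AABB_face_graph_triangle_primitive.h>
#include <set>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Surface_mesh<K::Point_3>                      Mesh;
typedef CGAL::AABB_face_graph_triangle_primitive<Mesh>      Primitive;
typedef CGAL::AABB_traits<K, Primitive>                     Traits;
typedef CGAL::AABB_tree<Traits>                             Tree;

// Sizing-field functor: the AABB tree finds the surface face closest to
// the query point; faces you selected get a finer size than the default.
struct Selected_face_sizing_field
{
    typedef K::FT FT;

    const Tree* tree;
    std::set<Mesh::Face_index> selected;   // faces loaded from your list
    FT fine_size, coarse_size;

    // Shape expected of a sizing field: queried with a point (dimension
    // and index are ignored in this simplified sketch).
    template <typename Index>
    FT operator()(const K::Point_3& p, int /*dim*/, const Index&) const
    {
        Tree::Point_and_primitive_id pp = tree->closest_point_and_primitive(p);
        return selected.count(pp.second) ? fine_size : coarse_size;
    }
};
```

You would then pass an instance of this functor as the SizingField argument of the Mesh_facet_criteria constructor instead of a constant facet size.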
I have an image with various odd shapes (such as circles and squares) which are coloured pure red (rgb(255, 0, 0) exactly). I want to draw boxes around these shapes, but to do that I need the coordinates of each corner from each box. This is the part I am having difficulty with.
I basically want to go from this:
To this:
I have tried many different ways to achieve this, including scanning along the y-axis until I find a shape and boxing it that way, starting from the corners of the image and moving towards the middle (neither of which works well for multiple shapes), and using external packages such as OpenCV.
I could use OpenCV to achieve this, but given the constraints I was hoping there was a way to do it which doesn't require an external package.
Can anyone with a bit more machine vision experience point me in the right direction please?
First, use the Hoshen-Kopelman algorithm to determine the connected clusters of pixels with the given criteria (being red), then all you have to do is find their min/max regions (on x and y axes) to wrap them with a rectangle.
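A minimal sketch of that pipeline in plain C++, with a BFS flood fill standing in for Hoshen-Kopelman (it assumes you already have a flat width x height red-pixel mask):

```cpp
#include <vector>
#include <queue>
#include <utility>
#include <cstdint>
#include <algorithm>

// Bounding box of one connected cluster of red pixels.
struct Box { int minX, minY, maxX, maxY; };

// Label 4-connected red clusters with a BFS flood fill and record each
// cluster's min/max extents on the x and y axes.
std::vector<Box> boundingBoxes(const std::vector<uint8_t>& red, int w, int h)
{
    std::vector<uint8_t> seen(w * h, 0);
    std::vector<Box> boxes;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (!red[y * w + x] || seen[y * w + x]) continue;
            Box b{x, y, x, y};
            std::queue<std::pair<int, int>> q;
            q.push({x, y});
            seen[y * w + x] = 1;
            while (!q.empty())
            {
                auto [cx, cy] = q.front(); q.pop();
                b.minX = std::min(b.minX, cx); b.maxX = std::max(b.maxX, cx);
                b.minY = std::min(b.minY, cy); b.maxY = std::max(b.maxY, cy);
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int i = 0; i < 4; ++i)
                {
                    int nx = cx + dx[i], ny = cy + dy[i];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        red[ny * w + nx] && !seen[ny * w + nx])
                    {
                        seen[ny * w + nx] = 1;
                        q.push({nx, ny});
                    }
                }
            }
            boxes.push_back(b);
        }
    return boxes;
}
```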
I am trying to detect a ball in a filtered image.
In this image I've already removed the stuff that can't be part of the object.
Of course I tried the HoughCircle function, but I did not get the expected output.
Either it didn't find the ball or there were too many circles detected.
The problem is that the ball isn't completely round.
Screenshots:
I had the idea that it could work if I identified individual objects, calculated their centers, and checked whether the radius is about the same in different directions.
But it would be nice if it also detected the ball when it isn't completely visible.
And with that method I can't detect semicircles or anything like that.
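For clarity, this is roughly the check I had in mind (just a sketch; the 0.15 threshold is made up):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>
#include <algorithm>

// Radius-uniformity idea: measure the distance from an object's centroid
// to each of its contour points and call it a ball candidate if the
// spread of those radii is small.
bool looksLikeBall(const std::vector<cv::Point>& contour)
{
    cv::Moments m = cv::moments(contour);
    if (m.m00 == 0) return false;
    double cx = m.m10 / m.m00, cy = m.m01 / m.m00;

    double sum = 0, sumSq = 0;
    for (const cv::Point& p : contour)
    {
        double dx = p.x - cx, dy = p.y - cy;
        double r  = std::sqrt(dx * dx + dy * dy);
        sum += r; sumSq += r * r;
    }
    double mean = sum / contour.size();
    if (mean <= 0) return false;
    double var = std::max(sumSq / contour.size() - mean * mean, 0.0);
    // Relative spread of radii; near 0 for a fully visible circle.
    return std::sqrt(var) / mean < 0.15;
}
```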
EDIT: These images are from a video stream (real time).
What other method could I try?
Looks like you've used difference imaging or something similar to obtain the images you have? Instead of looking for circles, look for a more generic loop. Suggestions:
Separate all connected components.
For every connected component -
Walk around the contour and collect all contour pixels in a list
Suggestion 1: Use least squares to fit an ellipse to the contour points (see the sketch after this list)
Suggestion 2: Study the curvature of every contour pixel and check if it fits a circle or ellipse. This check may be done by computing a histogram of edge orientations for the contour pixels, or by checking the gradients of orientations from contour pixel to contour pixel. In the second case, for a circle or ellipse, the gradients should be almost uniform (ask me if this isn't very clear).
Apply constraints on perimeter, area, lengths of major and minor axes, etc. of the ellipse or loop. Collect these properties as features.
You can either use hard-coded heuristics/thresholds to classify a set of features as ball/non-ball, or use a machine learning algorithm. I would first keep it simple and simply use thresholds obtained after studying some images.
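Here is a minimal OpenCV sketch of suggestion 1 (every threshold below is a placeholder you would tune on your own frames):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>

// Fit an ellipse (least squares) to every connected component's contour
// and keep only plausible ball candidates.
std::vector<cv::RotatedRect> findBallCandidates(const cv::Mat& binary)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary.clone(), contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_NONE);

    std::vector<cv::RotatedRect> candidates;
    for (const auto& c : contours)
    {
        if (c.size() < 5) continue;            // fitEllipse needs >= 5 points
        cv::RotatedRect e = cv::fitEllipse(c);
        double major = std::max(e.size.width, e.size.height);
        double minor = std::min(e.size.width, e.size.height);
        double area  = cv::contourArea(c);

        // Feature-based constraints: near-circular, not too small, and the
        // contour area should roughly match the ellipse area.
        if (minor / major > 0.7 &&
            major > 10 &&
            area > 0.6 * CV_PI * 0.25 * major * minor)
            candidates.push_back(e);
    }
    return candidates;
}
```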
Hope this helps.
I have written an algorithm to process a camera capture and extract a binary image of two features I'm interested in. I'm trying to find the best (fastest) way of detecting when the two features intersect and where the lowest point (greatest y coordinate) is (this will be the intersection).
I do not want to use a findContours() based method as this is too slow and, in my opinion, unnecessary. I also think blob detection libraries are too bloated for this.
I have two sample images (sorry for low quality):
(not touching: http://i.imgur.com/7bQ9qMo.jpg)
(touching: http://i.imgur.com/tuSmKw7.jpg)
Due to the way these images are created, there is often noise in the top right corner which looks like pixelated lines, but methods such as dilation and erosion lose resolution around the features I'm trying to find.
My initial thought would be to use direct pixel access to form a width filter and a height filter. The lowest point in the image is therefore the intersection.
I have no idea how to detect when they touch... logically I can see that a triangle is formed when they intersect, and otherwise there is no enclosed black area. Can I fill the image starting from the corner with, say, red, and then calculate how much of the image is still black?
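Something like this is what I'm picturing (a rough sketch; the corner choice and the white-features-on-black polarity are assumptions):

```cpp
#include <opencv2/opencv.hpp>

// Fill idea: flood-fill the background from a corner; if black pixels
// remain afterwards, some region is sealed off by the two features,
// which suggests they intersect. Input: CV_8UC1, features = 255.
bool featuresIntersect(const cv::Mat& binary)
{
    cv::Mat filled = binary.clone();
    if (filled.at<uchar>(0, 0) == 0)
        cv::floodFill(filled, cv::Point(0, 0), cv::Scalar(128));
    // Any pixel still 0 was unreachable from the corner.
    return cv::countNonZero(filled == 0) > 0;
}
```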
Does anyone have any suggestions?
Thanks
Your suggestion is way slower than finding contours. For binary images, finding contours is very easy and quick because you just need to find a black pixel followed by a white pixel, or vice versa.
Anyway, if you don't want to use it, you can use the vertical projection (vertical profile): from it you will see whether the objects intersect or not.
For example, in the following image check the letter "n", which is somewhat similar to the non-intersecting objects, and the letter "o", which is similar to the intersecting objects:
By analyzing these histograms, you can recognize whether the objects intersect or not.
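One simple way to analyze the projection, assuming the two features leave at least one empty column between them whenever they are separate (this is an assumption about your images, not a general rule), is sketched below; plain loops only, no findContours:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Vertical projection: count foreground pixels in every column, then look
// for an empty column inside the occupied span. A gap means the two
// features are separate. Input: CV_8UC1, features = 255.
bool objectsIntersect(const cv::Mat& binary)
{
    std::vector<int> proj(binary.cols, 0);
    for (int y = 0; y < binary.rows; ++y)
    {
        const uchar* row = binary.ptr<uchar>(y);
        for (int x = 0; x < binary.cols; ++x)
            if (row[x]) ++proj[x];
    }

    int first = -1, last = -1;
    for (int x = 0; x < binary.cols; ++x)
        if (proj[x]) { if (first < 0) first = x; last = x; }

    if (first < 0) return false;                // nothing in the image
    for (int x = first; x <= last; ++x)
        if (proj[x] == 0) return false;         // gap -> not touching
    return true;                                // no gap -> intersecting
}
```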
UPDATE:
I found that http://os.ivrpa.org/panosalado/wiki has an implementation in Java. Does anyone have something similar in C or C++?
I have this panorama, a spherical map from Google Street View, and want to map it onto a sphere/cube. Below are some examples and illustrations; what I seek is a library that can do it, or some implementation guidance.
I tried http://krpano.com/docu/tutorials/quickstart/#top, which gives the results listed at the bottom. It illustrates what I want, but the rotation axis is off. I need to create the views directly ahead and behind, left and right. Ideally I would like to map it onto the sphere and tell it what angles to extract (the orientation of the cube).
[Back,Down,Front,Left,Right,Up]
You could do this easily in POV-Ray by putting the camera in the middle of a sphere mapped with your texture. See image_map map_type 1 and e.g. this example.
But really this is very easy to implement yourself, assuming the input images are some sort of cylindrical equidistant or equirectangular projection: for each (x,y) in the output image you are rendering, just use the inverse formulas to compute a (longitude,latitude) in the input image and interpolate/copy over a pixel value.
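For illustration, a minimal sketch of that inverse mapping for the front cube face (OpenCV is used only for image storage, nearest-neighbour sampling keeps it short, and "front" meaning the +z direction is a convention I chose; the other five faces just permute/negate the x, y, z components):

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Render one cube face from an equirectangular panorama (longitude on x,
// latitude on y). Assumes an 8-bit 3-channel panorama (CV_8UC3).
cv::Mat renderFrontFace(const cv::Mat& pano, int faceSize)
{
    cv::Mat face(faceSize, faceSize, pano.type());
    for (int v = 0; v < faceSize; ++v)
        for (int u = 0; u < faceSize; ++u)
        {
            // Ray through this face pixel, the face spanning [-1,1] x [-1,1]
            // at distance 1 along +z.
            double x = 2.0 * (u + 0.5) / faceSize - 1.0;
            double y = 1.0 - 2.0 * (v + 0.5) / faceSize;
            double z = 1.0;

            double lon = std::atan2(x, z);                        // [-pi, pi]
            double lat = std::atan2(y, std::sqrt(x * x + z * z)); // [-pi/2, pi/2]

            // Inverse equirectangular: angles -> input pixel coordinates.
            double sx = (lon / (2.0 * CV_PI) + 0.5) * (pano.cols - 1);
            double sy = (0.5 - lat / CV_PI) * (pano.rows - 1);

            // Nearest-neighbour copy; interpolate bilinearly in practice.
            face.at<cv::Vec3b>(v, u) =
                pano.at<cv::Vec3b>(cv::saturate_cast<int>(sy),
                                   cv::saturate_cast<int>(sx));
        }
    return face;
}
```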