Hoped-for end result of detecting the ellipse in the photo and drawing its edges accurately:
Picture I'm trying to work with:
I'm trying to detect the ellipses of an eye in a side-view image, but when I run the Canny function and draw the edges, it only finds edges along the eyelashes and forms a rough ellipse there. This noise also causes problems in my Hough ellipse transform, where I threshold for all values higher than .9 (and lower than .7): the eyelash pixels have the maximum intensity values, so they are taken into account.
Is there a way to remove the noise (the eyelashes)? I am using skimage from scikit-image for all of this. This is a side view of the eye.
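A minimal sketch of one possible direction, assuming the eyelashes are thin dark structures: a greyscale morphological closing fills them in before Canny, and size limits on hough_ellipse help reject whatever fragments remain. The file name and every parameter value below are placeholders to tune, not a tested recipe.

```python
from skimage import io, color, feature, transform
from skimage.morphology import closing, disk

img = color.rgb2gray(io.imread("eye_side_view.png"))    # hypothetical file name
cleaned = closing(img, disk(5))                          # fills in thin dark lashes
edges = feature.canny(cleaned, sigma=2.5)

# Constrain the ellipse search to the expected eye size so small lash
# fragments cannot win the accumulator vote.
result = transform.hough_ellipse(edges, accuracy=20, threshold=250,
                                 min_size=60, max_size=200)
result.sort(order="accumulator")
yc, xc, a, b = (int(round(result[-1][f])) for f in ("yc", "xc", "a", "b"))
```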
I am using 2 different methods to render an image (as an opencv Matrix):
1. an implemented projection function that uses the camera intrinsics (focal length, principal point; distortion is disabled) - this function is used in other software packages and is supposed to work correctly (repository)
2. a 2D-to-2D image warping (here, I determine the intersections of my camera's corner rays with the 2D image that should be warped into the camera frame); this back-projection of the corner points uses the same camera model as above
Now, I overlay these two images, and what should basically happen is that a projected pen tip (method 1) lines up with a line drawn on the warped image (method 2). However, this is not happening.
There is a tiny shift in both directions, depending on the orientation of the pen that is writing, and it is reduced when I shift the principal point of the camera. My question is: since I am not considering the principal point in the 2D-to-2D image warping, can this be the cause of the mismatch? Or is it generally impossible to align the two, since the image warping is a simplification of the projection process?
Grey Point: projected origin (should fall in line with the edges of the white area)
Blue Reticle: penTip that should "write" the Bordeaux-colored line
Grey Line: pen approximation
Red Edge: "x-axis" of white image part
Green Edge: "y-axis" of white image part
EDIT:
I also did the same projection with the origin of the coordinate system, and here the mismatch grows the further the origin moves away from the center of the image (so delta[warp, project] gets larger at the image borders compared to the center).
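For what it's worth, under a pure pinhole model with distortion disabled, an ignored principal point produces a constant offset over the whole image rather than one that grows towards the borders. The numpy sketch below (all intrinsic values are made up) illustrates that check, which may help separate the principal-point effect from the warp approximation.

```python
import numpy as np

# Hypothetical intrinsics; replace with your calibrated values.
fx, fy = 800.0, 800.0
cx, cy = 310.0, 245.0            # actual principal point
w, h = 640, 480

K_full   = np.array([[fx, 0, cx],     [0, fy, cy],     [0, 0, 1.0]])
K_centre = np.array([[fx, 0, w / 2],  [0, fy, h / 2],  [0, 0, 1.0]])  # what a warp that ignores (cx, cy) assumes

def project(K, pts_cam):
    """Pinhole projection of Nx3 points given in the camera frame (no distortion)."""
    x = (K @ pts_cam.T).T
    return x[:, :2] / x[:, 2:3]

# Test points spread over the field of view.
pts_cam = np.array([[0.0, 0.0, 1.0], [0.3, 0.2, 1.0], [-0.4, -0.3, 1.0]])

delta = project(K_full, pts_cam) - project(K_centre, pts_cam)
print(delta)   # a constant (cx - w/2, cy - h/2) offset for every point
```

If the mismatch you measure is not constant but grows towards the borders, the principal point alone cannot explain it.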
I am working on a project for my thesis, building my own path tracer. Afterwards, I have to modify it so that I can implement the following paper:
https://mediatech.aalto.fi/publications/graphics/GPT/kettunen2015siggraph_paper.pdf
Of course I DO NOT want you to read the paper, but I link it anyway for those who are curious. In brief, instead of rendering an image with the normal path-tracing procedure alone, I have to calculate gradients for each pixel. Where before we shot rays only through each pixel, we now also shoot rays through the neighbouring pixels, 4 in total: left, right, top, bottom. In other words, I shoot one ray through a pixel and compute its final colour as in normal path tracing, but I also shoot rays through its neighbouring pixels, compute their final colours, and, to get the gradients, subtract the main pixel's colour from each of them. This means that for each pixel I will have 5 values in total (see the sketch right after this list):
colour of the pixel
gradient with right pixel = colour of the right pixel - colour of the pixel
gradient with left pixel = colour of the left pixel - colour of the pixel
gradient with top pixel = colour of the top pixel - colour of the pixel
gradient with bottom pixel = colour of the bottom pixel - colour of the pixel
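For clarity, here is a tiny sketch of those 5 values, assuming a hypothetical trace(x, y) function that returns the path-traced colour of pixel (x, y) as a numpy array (and assuming y grows downwards, so "top" is y - 1):

```python
def pixel_values(trace, x, y):
    c = trace(x, y)                          # colour of the pixel
    return {
        "colour":    c,
        "dx_right":  trace(x + 1, y) - c,    # gradient with right pixel
        "dx_left":   trace(x - 1, y) - c,    # gradient with left pixel
        "dy_top":    trace(x, y - 1) - c,    # gradient with top pixel
        "dy_bottom": trace(x, y + 1) - c,    # gradient with bottom pixel
    }
```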
The problem is that I don't know how to build the final image by both using the main colour and the gradients. What the paper says is that I have to use the screened Poisson reconstruction.
"Screened Poisson reconstruction combines the image and its
gradients using a parameter α that specifies the relative weights of
the sampled image and the gradients".
Everywhere I search for this Poisson reconstruction I find, of course, a lot of math, but it is hard to apply it to my project. Any ideas? Thanks in advance!
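Not the paper's solver, but a small sketch of the underlying linear system may help: screened Poisson reconstruction finds the image I that minimises α²‖I − I₀‖² + ‖∇I − g‖², which is just a sparse linear least-squares problem. The snippet below (all names and the α value are placeholders; one colour channel at a time) builds that system with SciPy; the paper solves the same system far more efficiently, e.g. in the frequency domain.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def screened_poisson(base, gx, gy, alpha=0.2):
    """base: ordinary path-traced image (H x W).
    gx, gy: sampled gradients (right - pixel, bottom - pixel), same shape.
    alpha:  weight of the sampled image relative to the gradients."""
    h, w = base.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    def diff_op(shifted, original):
        # One row per pixel pair: +1 on the neighbour, -1 on the pixel itself.
        m = len(original)
        rows = np.concatenate([np.arange(m), np.arange(m)])
        cols = np.concatenate([shifted, original])
        data = np.concatenate([np.ones(m), -np.ones(m)])
        return sp.coo_matrix((data, (rows, cols)), shape=(m, n))

    Dx = diff_op(idx[:, 1:].ravel(), idx[:, :-1].ravel())   # I[x+1] - I[x]
    Dy = diff_op(idx[1:, :].ravel(), idx[:-1, :].ravel())   # I[y+1] - I[y]

    A = sp.vstack([alpha * sp.identity(n), Dx, Dy]).tocsr()
    b = np.concatenate([alpha * base.ravel(),
                        gx[:, :-1].ravel(),   # gradient constraints where a right neighbour exists
                        gy[:-1, :].ravel()])  # gradient constraints where a bottom neighbour exists
    return lsqr(A, b)[0].reshape(h, w)
```

Small α trusts the gradients more (smoother, less noisy result); α → ∞ just returns the base image.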
Suppose the polygon is a hexagon.
To fill it with a gradient, assume there are many smaller hexagons nested inside it.
The smaller the hexagon, the brighter the colour of the pixels on it.
However, after I finish it, I find that there are spurious star edges between the center and the corners, like in the attached image (ignore the two discs inside).
How should I design an algorithm to make the filling smoother?
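One possible direction (a sketch, not a drop-in fix): for a regular hexagon, the nested-hexagon fill is effectively a hard minimum over the six per-edge distances, and that minimum has creases exactly along the centre-to-corner lines, which is where the star edges show up. Replacing the hard minimum with a smooth (soft) minimum removes the creases. The numpy snippet below uses arbitrary sizes and an arbitrary smoothing factor k; pixels outside the hexagon would still need to be masked separately.

```python
import numpy as np

def polygon_vertices(n=6, r=1.0):
    a = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.stack([r * np.cos(a), r * np.sin(a)], axis=1)

def soft_edge_distance(px, py, verts, k=8.0):
    """Smoothly blended distance from (px, py) to the polygon edges."""
    d = []
    for i in range(len(verts)):
        a, b = verts[i], verts[(i + 1) % len(verts)]
        ab = b - a
        t = np.clip(((px - a[0]) * ab[0] + (py - a[1]) * ab[1]) / ab.dot(ab), 0.0, 1.0)
        d.append(np.hypot(px - (a[0] + t * ab[0]), py - (a[1] + t * ab[1])))
    d = np.stack(d)                                   # distance to every edge
    return -np.log(np.exp(-k * d).sum(axis=0)) / k    # soft-min: smooth where two edges tie

ys, xs = np.mgrid[-1.2:1.2:512j, -1.2:1.2:512j]
brightness = soft_edge_distance(xs, ys, polygon_vertices())   # brighter towards the centre
```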
I have an image consisting of concentric circles. I'm trying to undistort the image so that the circles are equally spaced around the edges, as if the camera were parallel to the plane. (Some circles may appear closer to the next one, which is fine; I just want equal spacing all the way around the edge between two adjacent circles.)
I've tried to estimate a rigid transform by specifying points on the outer circle, but it distorts the inner circles too much, and I've tried findHomography by specifying points on all the circles and comparing them with the points where the circles should be.
From what I can see, the outer circles are 'squished' vertically, so they need to be squeezed horizontally, while the inner circles are more circular. What can I do to undistort this image?
https://code.google.com/p/ipwithopencv/source/browse/trunk/ThinPlateSpline/ThinPlateSpline/?r=4
Using this implementation of Thin Plate Spline, I was able to input a set of points representing all the distorted circles, and another set of points which represent where they should be, to get the desired output. It isn't absolutely perfect, but it's more than good enough!
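For anyone who prefers a Python route instead of the linked C++/OpenCV code, something similar can be sketched with SciPy's RBF interpolator using a thin-plate-spline kernel (my assumption of an equivalent, not the linked implementation). src_pts are the detected points on the distorted circles, dst_pts are where those points should lie; the single-channel image and both point sets are placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def tps_warp(image, src_pts, dst_pts):
    """Warp a single-channel image so that src_pts move to dst_pts (thin plate spline)."""
    h, w = image.shape
    # Fit a TPS that maps corrected (dst) coordinates back to distorted (src)
    # coordinates, so we can pull a pixel value for every output pixel.
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")
    ys, xs = np.mgrid[0:h, 0:w]
    sample = tps(np.stack([xs.ravel(), ys.ravel()], axis=1))   # (x, y) in the source image
    coords = np.stack([sample[:, 1].reshape(h, w),             # row (y) coordinates
                       sample[:, 0].reshape(h, w)])            # column (x) coordinates
    return map_coordinates(image, coords, order=1)
```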
From my research, the methods used to create a bloom effect are usually based on combining a sharp and a blurred image to give the glow. But I want to know how I can make GL_LINES (or any line) have brightness. Since in my game I randomly generate a simple 2D terrain, I wish to make the terrain line segments glow.
Use a fragment shader to calculate the distance from a fragment to the edge and colour the fragment with the appropriate value. You can use a simple control curve to control the radius and intensity falloff of the glow (like in Photoshop). It can also be tuned to act like a wireframe visualization. The idea is that you don't really rasterize the line primitives with a draw call; you just shade each pixel based on its distance from the corresponding edge.
The difference from using a blur pass is that, first, you get better performance, and second, you get per-pixel control over the glow: you can have a non-uniform glow, which you cannot get with a blur because a blur is not aware of the actual line geometry (it just blindly works on pixels), whereas with edge-distance detection you use the actual geometry data as input without flattening it down to pixels. You can also have effects like gradient glows, e.g. a glow colour that differs from the line colour and changes with the radius.
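As a rough illustration of the math the fragment shader would run per pixel (written here in numpy; the segment endpoints, radius and falloff curve are arbitrary examples):

```python
import numpy as np

def segment_distance(px, py, a, b):
    """Distance from point(s) (px, py) to the segment a-b."""
    ab = np.subtract(b, a)
    t = np.clip(((px - a[0]) * ab[0] + (py - a[1]) * ab[1]) / ab.dot(ab), 0.0, 1.0)
    return np.hypot(px - (a[0] + t * ab[0]), py - (a[1] + t * ab[1]))

ys, xs = np.mgrid[0:256, 0:256].astype(float)
d = segment_distance(xs, ys, a=np.array([40.0, 60.0]), b=np.array([220.0, 180.0]))

radius = 12.0
glow = np.clip(1.0 - d / radius, 0.0, 1.0) ** 2   # the "control curve": quadratic falloff
# glow is a per-pixel intensity you multiply with the line colour; in GLSL the
# same computation runs in the fragment shader with the segment endpoints
# passed in as uniforms.
```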