I have been assigned to a project to build a custom photo/image editor for a company, and looking at the list of desired functionality I found I'm at a loss as to how some of the features work. Can anyone explain what these functions are and how they are done?
Please explain it as if to a complete beginner (I have practically no prior knowledge of photo editing).
(Also, if you know of libraries that provide these functions I'd be very grateful for recommendations, but that's not necessary.)
Here's the list:
White balance editing of RAW (As shot/Auto/Presets/Manual-numerical/Grey card)
Exposure
Recovery
Fill Light
Blacks ("making black pixels more black")
Clarity/Microcontrast
Saturation
Vibrance
Curves
Levels
Sharpening
Color Balance
Avoiding color clipping
Avoiding highlight clipping
editing brightness but not colors (not sure about how to call this one - in LAB lightness channel)
Selective color corrections
Freemasking
Chromatic aberration
P.S.: If there is a Stack Exchange site more suitable for this question, please tell me where to ask!
White balance: you can find more adequate explanations here, and this question treats it nicely (Wikipedia says that white balance and color balance are similar)
Exposure: OpenCV info; it is how much light enters through the diaphragm of the device: Wiki
Recovery: Wiki says this; in RAW converters it usually means recovering detail in blown-out highlights, rather than deblurring
Fill light: brightens the shadows without affecting the highlights (something like a shadow lifter)
Blacks: sets the black point, i.e. how strongly the darkest tones are pushed towards pure black; you can think of it as the dark-end counterpart of exposure
Clarity/Microcontrast: This is a good explanation
Saturation: it is about color saturation, i.e. the intensity of all colors scaled equally
Vibrance: A nice explanation
Curves: a user-editable tone curve that remaps input tonal values to output values, so it is about correcting, not detecting
Levels: adjusts the black point, white point and midtone gamma of the image histogram
Sharpening: see this. It is about accentuating the edges in an image
Color Balance: see white balance
Avoiding color clipping: I think it is based on this idea
Avoiding highlight clipping: it should be the same as the above, but applied to luminance instead of color. See the clipping theory.
Editing brightness but not colors: work on the lightness channel of a LAB representation, or apply gamma correction to luminance only
Selective color corrections: it should be like white balance, but applied selectively on the RGB levels; see this
Freemasking: possibly chroma keying, i.e. replacing a "green screen" with whatever you want, as in television; it may also simply mean free-form (freehand) masking
Chromatic aberration: it is caused by the lens of the device: see this
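Several of the sliders above boil down to simple per-pixel arithmetic on a linear-RGB image. Here are illustrative NumPy sketches of exposure, manual white balance, levels and saturation; these are textbook formulations under simplifying assumptions (linear RGB in [0, 1]), not how any particular editor implements them:

```python
import numpy as np

def adjust_exposure(img, stops):
    """Exposure: multiply linear RGB by 2^stops; clipping guards against
    highlight clipping artifacts."""
    return np.clip(img * (2.0 ** stops), 0.0, 1.0)

def white_balance(img, gains):
    """Manual white balance: per-channel (r, g, b) gains. A grey-card preset
    would derive the gains so that the card's pixels come out neutral grey."""
    return np.clip(img * np.asarray(gains), 0.0, 1.0)

def levels(img, black=0.0, white=1.0, gamma=1.0):
    """Levels: remap [black, white] to [0, 1], then apply midtone gamma."""
    out = np.clip((img - black) / (white - black), 0.0, 1.0)
    return out ** gamma

def saturation(img, amount):
    """Saturation: interpolate between a per-pixel grey (luminance) image and
    the original colors; amount > 1 boosts, amount < 1 desaturates."""
    lum = img @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    grey = np.repeat(lum[..., None], 3, axis=-1)
    return np.clip(grey + amount * (img - grey), 0.0, 1.0)

img = np.random.rand(4, 4, 3)  # toy linear-RGB image in [0, 1]
out = saturation(levels(white_balance(adjust_exposure(img, 0.5),
                                      (1.1, 1.0, 0.9)), 0.02, 0.98), 1.2)
print(out.shape)  # (4, 4, 3)
```

Vibrance, recovery and clarity are more involved (they weight the adjustment by how saturated, bright or locally contrasty a pixel already is), but they follow the same per-pixel pattern.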
So my problem, as I said in the title, is that I have an image in a perspective view and I want to transform it into an orthographic view.
But as far as I can understand this example, the distance from the camera to the NearClip plane and to the FarClip plane is required.
I was wondering whether I'm going completely wrong, and if there is a way to accomplish that without knowing those distances?
If yes, I suppose it's something easy such as a matrix multiplication, but after a few hours of research I turn to you, searching for any help that can come...
Thanks a lot!
Best regards!
EDIT: I will explain the context; maybe it helps.
I have a Fish-eye camera that took a panoramic picture (like below, for example)
And my final goal is to create 6 cube faces (6 images that correspond to the up, down, right, left, front and back faces of a cube, as seen from inside the cube). So I tried to use the equirectangular projection to create a picture that contains the 6 faces.
But the problem is that the fisheye takes a perspective view, so my 6 pictures are perspective projections. And I want them to be orthographic... :'(
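For reference, each cube face can be generated from an equirectangular panorama by casting a ray per output pixel and looking the panorama up at that ray's direction. A rough NumPy sketch (the axis conventions and face orientations are assumptions and must be matched to your camera; real code would also interpolate instead of using nearest-neighbour lookup):

```python
import numpy as np

def cube_face_directions(face, size):
    """Unit ray directions for one cube face of size x size pixels.
    face is one of 'front', 'back', 'left', 'right', 'up', 'down'.
    The axis conventions here are illustrative only."""
    a, b = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    one = np.ones_like(a)
    axes = {
        'front': (a, b, one),    'back': (-a, b, -one),
        'right': (one, b, -a),   'left': (-one, b, a),
        'up':    (a, -one, b),   'down': (a, one, -b),
    }
    x, y, z = axes[face]
    d = np.stack([x, y, z], axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def sample_equirectangular(pano, face, size):
    """Nearest-neighbour lookup from an equirectangular panorama (H x W x 3)
    into one cube face: convert each ray direction to longitude/latitude,
    then to panorama pixel coordinates."""
    d = cube_face_directions(face, size)
    lon = np.arctan2(d[..., 0], d[..., 2])       # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))   # [-pi/2, pi/2]
    h, w = pano.shape[:2]
    u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return pano[v, u]

pano = np.random.rand(64, 128, 3)  # stand-in equirectangular panorama
face = sample_equirectangular(pano, 'front', 32)
print(face.shape)  # (32, 32, 3)
```

Note that each face produced this way is still a 90° perspective view from the camera's position; that is inherent to a cubemap and is not the same problem as removing perspective from a photographed scene.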
No, this is not possible without making several assumptions about distances or object sizes.
Of course you don't have any information about what is behind your objects from your perspective. That information would not be available even if you had the distances.
If that was possible there would be no need for 3d-imaging or telecentric lenses.
Of course you can also assume that your objects are spheres; then you know what to add in your reconstruction, but in general this is not viable.
This may be an old question, but the existing answer of "not possible" is not correct for pictures that are less extreme than the example. Photoshop has a Lens Correction tool, as does the free program Gimp. A tutorial for the Photoshop tool is at https://helpx.adobe.com/photoshop/using/correcting-image-distortion-noise.html#correct_lens_distortion_and_adjust_perspective showing it can be done through Choose Filter > Lens Correction. And though you would need to know specific measurements from the camera or scene to perfectly correct the image, you can get pretty close and use assumptions that some objects will have straight edges or certain lines will be parallel.
Gimp's tool is under Filters -> Distorts -> Lens Distortion; some examples can be found at http://www.texturemate.com/content/how-easily-remove-lens-distortion-photos-using-gimp and there's a StackExchange answer for it at https://gamedev.stackexchange.com/questions/129415/converting-real-life-perspective-photos-into-orthographic-view-for-texture-creat
Neither of these may be extensive enough to un-distort an image from fisheye lenses, but these options are available to anyone who found this page and hopes to adjust an image with more common distortions.
I'm trying to detect the location of a fingertip in an image. I've been able to crop out a region of the image that must contain a fingertip, and to extract the edges using the Canny edge detector. However, I'm stuck. Since my project description says I can't use skin color for detection, I cannot find the exact contour of the finger and will have to try to separate the fingertip using edges alone. Right now I'm thinking that since the finger has a curved arch/letter-U shape, maybe that could be used for detection. But since the method has to be rotation/scale invariant, most algorithms I have found so far are not up to it. Does anyone have an idea of how to do this? Thanks to anyone who responds!
This is the result I have now. I want to put a bounding box around the index fingertip, or the highest fingertip, whichever is the easiest.
You may view the tip of the U as a corner and try a corner detection method, such as the Förstner algorithm, which will locate a corner with sub-pixel accuracy, or the Harris corner detector, which has an implementation included in the Feature2D class in OpenCV.
There is a very clear and straightforward lecture on the Harris corner detector that I would like to share with you.
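To see why the Harris response singles out U-tips and other corner-like points, here is a minimal NumPy sketch of the response R = det(M) - k·trace(M)² using crude box averaging; OpenCV's cv2.cornerHarris does the same thing properly with Gaussian weighting:

```python
import numpy as np

def harris_response(gray, k=0.04, r=2):
    """Harris corner response with finite-difference gradients and a simple
    (2r+1)x(2r+1) box window; a production version would use Gaussian
    smoothing instead."""
    iy, ix = np.gradient(gray.astype(float))   # image gradients (rows, cols)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # sum over the window by shifting and adding (wraps at borders,
        # which is fine for this toy example)
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# toy image: a bright square whose top-left corner sits at (10, 10)
img = np.zeros((32, 32))
img[10:, 10:] = 1.0
resp = harris_response(img)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # near (10, 10): edges score low, the corner scores high
```

A fingertip edge is blunter than this synthetic corner, so you would smooth the edge map first and take local maxima of the response rather than a single global argmax.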
Is it possible to create something like this in Inkscape? I have been searching and trying for quite some time, but so far with no results...
So what I am looking for is a gradient which follows the direction of the path it is applied to.
Gradient along a path http://coreldraw.com/cfs-filesystemfile.ashx/__key/CommunityServer.Components.PostAttachments/00.00.02.07.23/GradientBlend2.jpg
I am not aware of a solution that perfectly satisfies your needs however here is one approach to get close:
Draw a line, open the Fill and Stroke dialog and set the width of the stroke (Stroke style tab) to 50. Transform the stroke to a path (Path > Stroke to Path).
Fill the path with a linear gradient (Fill and Stroke dialog > Fill > Linear gradient) and choose any colours you like.
Select the node tool (F2) and a line will appear stretching across the area of your shape (marked with an exclamation mark in the picture below). Adjusting this line may help you fine-tune the gradient (however, strictly speaking, the gradient will not follow the path but a rectangle defined by the start and end markers of that line).
This should work for simple shapes that do not reverse. Let me know if this is good enough or if you need more detailed instructions.
I'm working on a graphics project trying to create an impossible cube in 3D. An impossible cube looks like this:
The trick behind this is that two of the edges are 'cut', and the picture is taken from a specific angle to give the illusion of impossibility.
Well, I'm trying to make this, but instead of a static image I want to be able to animate it (rotate it around) while maintaining the impossible properties.
I have managed to make a cube in blender as you can see in the screenshot below:
I would like to hear your suggestions as to how I can achieve the desired effect. One idea would be to make transparent the portion of an edge that has another edge (or more) behind it, so that every time the camera angle changes the transparent patch moves along.
It doesn't have to be done in Blender exclusively so any solutions in OpenGL etc are welcome.
To give you an idea of what the end result should be, this is a link to such an illustration:
3D Impossible Cube Illusion Animation
It's impossible (heh). Try to imagine rotating the cube so that the impossibly-in-front bit moves to the left. As soon as it would "cross" the current leftmost edge, the two properties of "it's in front" and "it's in the back" will not be possible to fulfill simultaneously.
If you have back-face culling enabled but depth testing disabled, and draw the primitives in the right order, you should get the Escher cube without any need for cuts. This should be relatively easy to animate.
I am trying to build an application in which the user can draw clothoids with the mouse, i.e. set the start point and the end point of the spiral and then, by dragging these points, also set the shape of the clothoid by modifying the start and end curvature. Probably I will need to use the tangents for changing the curvature, though I am not sure about that.
How can one implement that in Qt? Do you know of any example codes that I could run in Qt?
The best approach would probably be to use the Graphics View Framework.
There are a few examples that should help you get started on the Graphics View Examples page. The Diagram Scene one looks like a good starting point for what you want to achieve.
I realize that this is an old question, but for interested parties there is a good discussion of theory and pseudocode for Euler spirals (clothoids) in the paper "Euler Spiral for Shape Completion" by Kimia, Frankel, and Popescu. Sample C++ code can be found online at Brown University's website.
Euler Spiral for Shape Completion
Page with download link for C++ code for method of Kimia, Frankel, and Popescu
Papers by Levien and others suggest methods to improve upon the "biarc" calculation of the paper by Kimia et al. Levien's paper includes an in-depth history.
The Euler spiral: a mathematical history by Raph Levien
You only need four parameters to draw the spiral: two end points, and the angles of tangents at those end points. (You don't need to define curvature.) The code outputs the intermediate points between the two end points at distance increments of your choice. You simply need to plot and connect those intermediate points.
Once you implement the code, you may need to tweak some of the parameters such as the minimum curvature. You'll likely see a few parameters for which the code "blows up".
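For the question's start/end-curvature parameterisation, the points can also be sampled directly by integrating a linearly varying curvature (that linear variation is what defines a clothoid). A minimal sketch, with illustrative function and parameter names that are not taken from the Kimia et al. code:

```python
import numpy as np

def clothoid_points(x0, y0, theta0, k0, k1, length, n=200):
    """Sample n points along a clothoid whose curvature varies linearly from
    k0 to k1 over the given arc length, starting at (x0, y0) with initial
    tangent angle theta0 (radians). Simple forward integration -- accurate
    enough for interactive drawing, not for CAD-grade output."""
    ds = length / (n - 1)
    pts = [(x0, y0)]
    theta, x, y = theta0, x0, y0
    for i in range(n - 1):
        s = i * ds
        kappa = k0 + (k1 - k0) * s / length  # linear curvature profile
        theta += kappa * ds                  # heading is the integral of curvature
        x += np.cos(theta) * ds
        y += np.sin(theta) * ds
        pts.append((x, y))
    return np.array(pts)

pts = clothoid_points(0.0, 0.0, 0.0, 0.0, 0.5, 10.0)
print(pts.shape)  # (200, 2)
```

In a Qt application, each drag of an endpoint handle would recompute these points and rebuild a QPainterPath (shown by a QGraphicsPathItem) from them.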