Attach markers to a bezier line and then "pull" it straight with Inkscape

I have a bezier curve that I've used to trace the course of a stream on a map. At points along the stream I have objects that mark the locations of data. Is there any way to attach the objects to the stream curve and then straighten the curve?
I'd like to do this in order to create a river profile and also to measure the distance between different pieces of data (some markers are elevation, and some are other data).
Before: (image: markers along the curved stream)
After: (image: the same markers along a straightened line)
... In my image, all of the markers should be evenly spaced along the line; they just aren't perfectly placed in the version I made by hand.


How to compare two edge images in OpenCV (not matchShapes)

A little introduction to what I'm doing ...
For academic purposes I am creating an application in C++ using OpenCV for the detection of static objects in a scene.
The application is based on a combined approach of background subtraction and tracking, and the detection of events related to abandoned objects works fine.
But at the moment I have a problem that I can't solve: I have to implement a finite state machine to detect the event of object removal, both before and after the object has been absorbed into the background.
To do this I was told by my superiors to use the edges of the objects.
And now the problem.
After detecting a vehicle illegally parked along a road, I need to compare the edges of various images (the background captured at the time of the alarm, the current background, the current frame) to understand what the vehicle does (starts moving, remains parked, or starts moving after having entered the background).
I run these comparisons on the region of the scene in which the vehicle sits (vehicles typically have different sizes), and I extract the edges using the Canny algorithm, obtaining a binarized CV_8UC1 cv::Mat.
At this point I have to compare them.
I tried to detect the contours with findContours and compare them with matchShapes, but it does not seem the right way: I'd have to compare each contour of the first image with every contour of the second, and the two images to compare typically have different numbers of contours (for example the original background and the current background, because the edges in the current background increase once the vehicle enters the background).
I also tried to create a new image in which each pixel is the absolute difference of the corresponding pixels of the other two images; then I counted the white pixels of the difference image (wPx) and used this number for the comparison as follows: I set two thresholds (thr1 and thr2) and computed the perimeter of the bounding rect of the vehicle (perim); if wPx < thr1*perim the images are considered equal, and if wPx > thr2*perim the images are considered different.
(I set the thresholds as percentages and multiply them by the perimeter of the bounding box to adapt the thresholds to the vehicle's dimensions.)
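For reference, that difference test might look like the following in OpenCV (edgesA, edgesB, roi, thr1 and thr2 are hypothetical names for the inputs described above):

cv::Mat diff;
cv::absdiff(edgesA, edgesB, diff);              // per-pixel absolute difference
int wPx = cv::countNonZero(diff);               // white pixels in the difference image
double perim = 2.0 * (roi.width + roi.height);  // perimeter of the vehicle's bounding rect
if (wPx < thr1 * perim) {
    // images considered equal
} else if (wPx > thr2 * perim) {
    // images considered different
}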
This solution, however, does not seem very robust.
Do you have something simple to suggest?
Thank you very much in any case; more than once you StackOverflow users have helped me!
PS: THIS is an example of the images that I have to compare.
The first is the background without the stationary vehicle; it contains the edges of the street.
The second is the original background, the one captured when the stationary vehicle is detected.
The third is the current background (which in this case equals the original, being the same frame, but it will change later on).
The fourth is the current frame of the video.
You may want to take a look at this paper: A Novel SIFT-Like-Based Approach for FIR-VS Images Registration. Aguilera et al. propose an Edge Oriented Histogram descriptor (EOH-SIFT).
That paper's goal is to register multispectral images (visible and infrared) to each other. Because of the different characteristics of the images, the authors first extract edges/contours in both images, which results in images similar to yours.
So, you can describe your image patches using this descriptor (illustrated by a figure in the paper):
Subdivide your image patch into 4x4 zones.
For each of the 16 subregions, compose a histogram of the contour orientations (5 bins).
Concatenate the histograms into one descriptor vector of size 16x5 = 80 bins.
Normalize the feature vector.
So, every image you want to compare (in your case four) is described by an 80-dimensional feature vector. You can compare the images to each other by computing the Euclidean distance between their vectors.
Note: the paper suggests patches of size 80x80 or 100x100 (NxN) pixels; you may have to adjust this to your image sizes.
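A minimal sketch of such a descriptor in OpenCV, assuming a gradient-based orientation estimate (the function name and the edge-magnitude threshold are illustrative, not the paper's exact formulation):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// 80-bin EOH-style descriptor: 4x4 grid of subregions, 5 orientation bins each.
std::vector<float> eohDescriptor(const cv::Mat& patch)  // CV_8UC1 edge image
{
    CV_Assert(patch.rows >= 4 && patch.cols >= 4);
    cv::Mat gx, gy;
    cv::Sobel(patch, gx, CV_32F, 1, 0);  // horizontal gradient
    cv::Sobel(patch, gy, CV_32F, 0, 1);  // vertical gradient

    std::vector<float> desc(4 * 4 * 5, 0.f);
    const int cw = patch.cols / 4, ch = patch.rows / 4;
    for (int y = 0; y < patch.rows; ++y) {
        for (int x = 0; x < patch.cols; ++x) {
            const float dx = gx.at<float>(y, x), dy = gy.at<float>(y, x);
            const float mag = std::sqrt(dx * dx + dy * dy);
            if (mag < 1e-3f) continue;                     // not an edge pixel
            float angle = std::atan2(dy, dx);              // orientation in [-pi, pi]
            if (angle < 0) angle += (float)CV_PI;          // fold into [0, pi)
            const int bin  = std::min(4, (int)(angle / CV_PI * 5));
            const int cell = std::min(3, y / ch) * 4 + std::min(3, x / cw);
            desc[cell * 5 + bin] += mag;                   // magnitude-weighted vote
        }
    }
    cv::normalize(desc, desc, 1.0, 0.0, cv::NORM_L2);      // unit-length vector
    return desc;
}

Two patches can then be compared with cv::norm(descA, descB, cv::NORM_L2).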

aligning 2 face images based on their marker points

I am using OpenCV and C++. I have 2 face images which contain marker points on them, and I have already found the coordinates of the marker points. Now I need to align those 2 face images based on those coordinates. The 2 images are not necessarily of the same size, which is why I can't figure out how to start aligning them, what should be done, etc.
In your case, you cannot apply the homography-based alignment procedure. Why not? Because it does not fit this use case: it was designed to align flat surfaces, and faces (3D objects) with markers at different places and depths are clearly not planar surfaces.
Instead, you can:
try to match the markers between the images and then interpolate the displacement field of the other pixels. Classical ways of doing this include moving least squares interpolation or RBFs (see the sketch after this list);
otherwise, a more "face processing" way of doing it would be to decompose the face images into a texture and a face model (as AAMs do) and work with that decomposition.
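A minimal sketch of the RBF route using OpenCV's shape module (a thin plate spline is one such RBF; the function name and the assumption that marker i in one image matches marker i in the other are mine):

#include <opencv2/opencv.hpp>
#include <opencv2/shape.hpp>

// Warp src so that its markers land on the corresponding markers of the target.
cv::Mat warpFace(const cv::Mat& src,
                 const std::vector<cv::Point2f>& srcMarkers,
                 const std::vector<cv::Point2f>& dstMarkers)
{
    std::vector<cv::DMatch> matches;              // marker i matches marker i
    for (size_t i = 0; i < srcMarkers.size(); ++i)
        matches.emplace_back((int)i, (int)i, 0.f);

    cv::Ptr<cv::ThinPlateSplineShapeTransformer> tps =
        cv::createThinPlateSplineShapeTransformer();
    // warpImage() performs backward mapping, so the target shape goes first.
    tps->estimateTransformation(cv::Mat(dstMarkers).reshape(2, 1),
                                cv::Mat(srcMarkers).reshape(2, 1),
                                matches);
    cv::Mat warped;
    tps->warpImage(src, warped);
    return warped;
}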
Define "align".
Or rather, notice that there does not exist a unique warp of the face-side image that matches the overlapping parts of the frontal one - meaning that there are infinite such warps.
So you need to specify more precisely what your goal is and what extra information you have, in addition to the images and a few matched points on them. For example, is your camera setup calibrated? I.e., do you know the focal lengths of the cameras and their relative positions and poses?
Are you trying to build a texture map (e.g. a projective one) so you can plaster a "merged" face image on top of a 3d model that you already have? Then you may want to look into cylindrical or spherical maps, and build a cylindrical or spherical projection of your images from their calibrated poses.
Or are you trying to reconstruct the whole 3D shape of the head based on those 2 views? Obviously you can do this only over the small strip where the two images overlap, and the quality of the images you posted seems a little too poor for that.
Or...?

Drawing application with OpenGL/openFrameworks

I'm trying to program a brush stroke/drawing application using OpenGL within openFrameworks. For now I'm just trying to create squiggly lines that follow the mouse.
I've started by using ofPolyline, but I have only managed to create a straight line that follows my mouse. I would really appreciate some pseudocode or something to point me in the right direction.
ofPoint start, end;
start.set(mouseX, mouseY);
end.set(mouseX, mouseY);
ofPolyline myline;
myline.addVertex(start.x, start.y);
myline.curveTo(end.x, end.y);
myline.bezierTo(mouseX, mouseY, mouseX, mouseY, mouseX, mouseY);
myline.addVertex(end.x, end.y);
myline.draw();
A bezier curve with only two distinct vertices is always just a straight line segment. You need to add more vertices/control points to get non-degenerate (round) curves. So you could store the last mouse position somewhere and add a new vertex whenever the mouse has moved by a certain amount (e.g. 20 pixels), or add a vertex when the user clicks. However, if you always just call bezierTo(x, y, x, y, x, y), you will still only get straight lines; you need to offset the two control points from (x, y) to get round curves.
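A minimal openFrameworks sketch along those lines (the member name line and the 20-pixel threshold are illustrative):

// In ofApp.h: ofPolyline line;
void ofApp::mouseDragged(int x, int y, int button) {
    // Add a vertex only once the mouse has moved far enough;
    // curveTo() then interpolates a smooth (Catmull-Rom) segment.
    if (line.size() == 0 ||
        ofDist(x, y, line[line.size() - 1].x, line[line.size() - 1].y) > 20) {
        line.curveTo(x, y);
    }
}

void ofApp::draw() {
    ofSetColor(0);
    line.draw();
}

Note that the first and last points of a Catmull-Rom sequence act as control points, so the visible curve starts after a few samples.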

3D reconstruction using stereo vision - theory

I am currently reading into the topic of stereo vision, using the book by Hartley & Zisserman alongside some papers, as I am trying to develop an algorithm capable of creating elevation maps from two images.
I am trying to come up with the basic steps for such an algorithm. This is what I think I have to do:
If I have two images, I somehow have to find the fundamental matrix F in order to find the actual elevation values at all points from triangulation later on. If the cameras are calibrated this is straightforward; if not, it is slightly more complex (plenty of methods for this can be found in H&Z).
It is necessary to know F in order to obtain the epipolar lines. These are the lines along which the match in the second image for an image point x in the first image must lie.
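(As an aside, OpenCV can estimate F from point matches; a minimal sketch with hypothetical matched point vectors pts1 and pts2:)

std::vector<cv::Point2f> pts1, pts2;  // matched points, filled elsewhere
cv::Mat F = cv::findFundamentalMat(pts1, pts2, cv::FM_RANSAC);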
Now comes the part where it gets a bit confusing for me:
I would take an image point x_i in the first picture and try to find the corresponding point x_i' in the second picture using some matching algorithm. Using triangulation it is then possible to compute the real-world point X and from that its elevation. This process is repeated for every pixel in the right image.
In a perfect world (no noise etc.) triangulation could be done exactly from
x1 = P1 X
x2 = P2 X
In the real world it is necessary to find a best fit instead.
Doing this for all pixels will lead to the complete elevation map as desired; some pixels will, however, be impossible to match and therefore can't be triangulated.
What confuses me most is that I have the feeling that Hartley & Zisserman skip the entire discussion on how to obtain point correspondences (matching?), and the papers I read in addition to the book talk a lot about disparity maps, which aren't mentioned in H&Z at all. However, I think I understood correctly that the disparity is simply the difference x1_i - x2_i?
Is this approach correct, and if not, where did I make mistakes?
Your approach is in general correct.
You can think of a stereo camera system as two points in space whose relative orientation is known. These are the optical centers. In front of each optical center, you have a coordinate system. These are the image planes. When you have found two corresponding pixels, you can calculate a line for each pixel, which goes through the pixel and the respective optical center. Where the two lines intersect, there is the object point in 3D. Because the world is not perfect, they will probably not intersect, so one may use the point where the lines are closest to each other.
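OpenCV ships a linear least-squares version of exactly this best fit; a minimal sketch (the input layout follows cv::triangulatePoints, the function name is mine):

#include <opencv2/opencv.hpp>

// P1, P2: 3x4 projection matrices; pts1, pts2: 2xN float arrays of matched pixels.
cv::Mat triangulate(const cv::Mat& P1, const cv::Mat& P2,
                    const cv::Mat& pts1, const cv::Mat& pts2)
{
    cv::Mat X;
    cv::triangulatePoints(P1, P2, pts1, pts2, X);    // 4xN homogeneous points
    cv::Mat pts3d;
    cv::convertPointsFromHomogeneous(X.t(), pts3d);  // Nx1, 3 channels (X, Y, Z)
    return pts3d;
}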
There exist several algorithms to detect which points correspond.
When using disparities, the two image planes need to be aligned (rectified) such that the images are parallel and each row in image 1 corresponds to the same row in image 2. Correspondences then only need to be searched on a per-row basis, and it is enough to know the difference along the x-axis between corresponding points. This difference is the disparity.
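For a rectified pair, OpenCV's block matchers do this per-row search directly; a minimal sketch (file names and parameter values are placeholders):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    // numDisparities must be a multiple of 16; blockSize is the matching window.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
    cv::Mat disp16;
    sgbm->compute(left, right, disp16);  // fixed-point result, scaled by 16

    cv::Mat disparity;
    disp16.convertTo(disparity, CV_32F, 1.0 / 16.0);  // true disparity in pixels
    // With a calibrated setup, depth follows as Z = f * B / disparity
    // (focal length f, baseline B).
    return 0;
}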

Drawing a Smooth Line from Tablet Input

As the user drags their stylus across the tablet, you receive a series of coordinates. You want to approximate the pen's path with a smooth line, trailing only a few sample points behind it. How would you do this?
In other words, how would you render a nice smooth responsive line as a user draws it with their tablet? Simply connecting the dots with straight lines is not good enough: real drawing programs do a much better job of curving the line, no matter how close together or far apart the sample points are. Some even let you give them a number to indicate the amount of smoothing to be done, accounting for jittery pens and hands. Where can I learn to do this stuff?
I know this is an old question, but I had the same problem and I came up with 2 different solutions:
The first approach is to use two resolutions: one, while the user is inserting the path, connect the points with straight lines; two, when the user finishes the stroke, delete the lines and draw the spline over them. That should be smoother than the straight lines.
The second approach is to smooth the new points with a weighted mean of the sampled points. So each time you get a new point [x1,y1], instead of painting it directly, you take the previous point [x2,y2] and create a new intermediate point as the weighted mean of the two. The pseudocode could be something like:
newPoint = [x1,y1];
oldPoint = [x2,y2];
point2Paint = [(x1*0.3) + (x2*0.7), (y1*0.3) + (y2*0.7)];
oldPoint = newPoint;
0.7 and 0.3 are the coefficients for the weighted mean (you can change them to get your desired amount of smoothing).
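In C++, this incremental smoothing might look like the following (the struct and names are illustrative):

struct Point { float x, y; };

// Blend each incoming sample with the previous raw sample (0.3 / 0.7 weights).
struct StrokeSmoother {
    Point prev{};
    bool hasPrev = false;
    Point feed(Point p) {
        if (!hasPrev) { prev = p; hasPrev = true; return p; }
        Point out{ p.x * 0.3f + prev.x * 0.7f,
                   p.y * 0.3f + prev.y * 0.7f };
        prev = p;    // keep the raw sample, as in the pseudocode above
        return out;  // paint this point instead of the raw one
    }
};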
I hope this helps.
UPDATE Dec 13: here is an article explaining different drawing methods; there are good concepts that can be applied (edge smoothing, bezier curves, smooth joints):
http://perfectionkills.com/exploring-canvas-drawing-techniques
I never had to implement these myself (only for academic purposes), but you may want to take a look at Wikipedia's interpolation article.
Extracted from the article:
interpolation is a method of constructing new data points within the range of a discrete set of known data points.
In engineering and science one often has a number of data points, as obtained by sampling or experimentation, and tries to construct a function which closely fits those data points. This is called curve fitting or regression analysis. Interpolation is a specific case of curve fitting, in which the function must go exactly through the data points.
Hope it helps.