Is there a way to crop an image with Raphael.js? - raphael

I am trying to fit part of an image into a Raphael object.
Scaling the image works perfectly, but when I try to translate it, it ends up returning the wrong part of the image.
I am scaling the image using "S1.5,1.5,0,0", that is, I am not scaling it around the middle point, so the scaling works beautifully.
But as soon as I try to offset the image, the resulting image fragment comes from the wrong place.
Maybe there is another way to do it in Raphael.
What I am trying to accomplish is to use a fragment of an external image as an image object in Raphael, i.e. I need to copy a rectangle from the source image into it.
Something like:
copy a fragment of the original image (x0 = 100, y0 = 120, width = 300, height = 250) to the image object, which has dimensions width = 150 and height = 125.
I have been looking for an answer for some time, but nothing that really helps.
Edit:
The fiddle is
/w9XSf/12/
In the example above, I am grabbing a 100 x 60px area from the original image (which is 612 x 325px), and trying to display it on the output image, which is 500 x 300px.
The scale works, but the area it is grabbing is not the one I need.
It does work if I grab from 0, 0.
But as I move away from the top left corner of the original image, the area it actually gives me drifts farther and farther from the one I need.
Any ideas? (I have already tried swapping the order of the T and the S in the transform string).
Thanks.

Using Raphael, the following code creates a container used to display an image, duly translated and scaled. A live version of the solution is available at http://jsfiddle.net/s6DHf/, which is a fork of the fiddle from the question.
var outputW = 525,
    outputH = 300,
    sourceX = 100,
    sourceY = 100,
    scaleX = 1.5,
    scaleY = 1.5,
    paper = new Raphael("image", outputW, outputH),
    bgImg = paper.image("http://cdn3.whatculture.com/wp-content/uploads/2013/04/MAN-OF-STEEL-e1365755036183.jpg", 0, 0, 350, 200)
        .transform("t" + sourceX + "," + sourceY + "s" + scaleX + "," + scaleY + ",0,0");
Check the use of "s" and "t" (lowercase), which denote relative scaling and relative translation, respectively. The problem was caused by the use of "S" and "T" (uppercase), which denote absolute scaling and translation.
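For the specific crop described in the question (showing the 300 x 250 region starting at 100, 120 inside a 150 x 125 image object), one alternative that avoids transform strings entirely is to size the paper to the output and draw the whole source image scaled and shifted so the wanted region lands at the origin. The numbers below are taken from the question, the URL is a placeholder, and the snippet is a rough sketch rather than a tested solution:
var srcW = 612, srcH = 325,            // full size of the source image
    cropX = 100, cropY = 120,          // top-left corner of the wanted region
    cropW = 300, cropH = 250,          // size of the wanted region
    outW = 150, outH = 125,            // size of the output image object
    sx = outW / cropW,                 // horizontal scale factor
    sy = outH / cropH,                 // vertical scale factor
    cropPaper = new Raphael("crop", outW, outH);
// Draw the whole image scaled, shifted so the crop region sits at (0, 0);
// the paper itself clips everything outside its outW x outH area.
cropPaper.image("source.jpg", -cropX * sx, -cropY * sy, srcW * sx, srcH * sy);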
Raphael reference: http://raphaeljs.com/reference.html#Element.transform
Hope this helps.

Related

How to use IDirectManipulationViewport::SetViewportRect?

Here is the slightly modified Direct Manipulation sample app from the Windows 8 classic samples. I removed all elements except the single Viewport and its Content (a checkered texture). When I set IDirectManipulationViewport::SetViewportRect() with an offset from the origin (e.g. SetViewportRect(100, 100, client_rect.right, client_rect.bottom)), I expect the content to be aligned at 100, 100. However, the content is always aligned at the window (parent IDirectCompositionVisual) origin.
I also tried IDirectManipulationViewport::SetViewportTransform() with a translation matrix, but the result is the same.
What is the correct way of positioning the viewport somewhere other than the origin of the visual? Is it possible at all? Or should I create another child IDirectCompositionVisual for the viewport, position it with SetOffsetX/Y and add the content to it?
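Something like the following is what I mean by the child-visual approach (a rough, untested sketch; dcompDevice and rootVisual stand in for the corresponding objects that already exist in the sample):
// Hypothetical sketch: instead of offsetting the viewport rect, host the
// content in its own DirectComposition visual and offset that visual.
Microsoft::WRL::ComPtr<IDCompositionVisual> contentVisual;
HRESULT hr = dcompDevice->CreateVisual(&contentVisual);
if (SUCCEEDED(hr)) hr = contentVisual->SetOffsetX(100.0f);
if (SUCCEEDED(hr)) hr = contentVisual->SetOffsetY(100.0f);
if (SUCCEEDED(hr)) hr = rootVisual->AddVisual(contentVisual.Get(), TRUE, nullptr);
// ...then set the content on contentVisual and associate it with the viewport.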
Here is a link to documentation
UPD after Rita Han's answer:
If you make just the following modifications to the sample:
//modify viewport rectangle in CAppWindow::_SizeDependentChanges()
_viewportOuterRect.left = 100;
_viewportOuterRect.top = 100;
//align content at the center in HRESULT CAppWindow::_InitializeManagerAndViewport()
primaryContentOuter->SetHorizontalAlignment(DIRECTMANIPULATION_HORIZONTALALIGNMENT_CENTER);
primaryContentOuter->SetVerticalAlignment(DIRECTMANIPULATION_VERTICALALIGNMENT_CENTER);
//change zoom boundaries to enable zoom out
hr = primaryContentOuter->SetZoomBoundaries(0.1f, 5.0f);
If you zoom out, you will see the following:
red - actual incorrect viewport rectangle (_viewportOuterRect.left and top coordinates are ignored, however the size is changed).
green - expected viewport rectangle.
blue - expected content aligned position.
This sample works for me; you can have a try. Here is the related code I modified for your case:
::GetClientRect(_hWnd, &_viewportOuterRect);
_viewportOuterRect.left = 100;
_viewportOuterRect.top = 100;
if (SUCCEEDED(hr))
{
    hr = _viewportOuter->SetViewportRect(&_viewportOuterRect);
}
The output:

Swift Imageview Circular

I am trying to get my profile picture to display as a circular view using Swift 3. This is my code:
self.view.layoutIfNeeded()
self.profileImageView.image = image
self.profileImageView.layer.cornerRadius = self.profileImageView.frame.width/2.0
self.profileImageView.clipsToBounds = true
self.profileImageView.layer.masksToBounds = true
It works well on square images, but once the image is not square it is not displayed as circular. What do I need to do in order to display the image view as a circle? Or is this feature limited to square images only?
Your code is making the corner radius half the width. This works fine when height == width (so radius also == height/2), but otherwise it won't work.
To fix this, add constraints to make your profileImageView square, then set profileImageView.contentMode = .scaleAspectFill so the non-square picture fills the square frame.
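For example, a minimal sketch of that setup done in code (it assumes the image view is already positioned by other constraints; the names match the question's code):
// Force the image view to stay square, crop the picture to fill it,
// then round the corners to half the side length.
self.profileImageView.translatesAutoresizingMaskIntoConstraints = false
self.profileImageView.widthAnchor.constraint(equalTo: self.profileImageView.heightAnchor).isActive = true
self.profileImageView.contentMode = .scaleAspectFill
self.view.layoutIfNeeded()
self.profileImageView.layer.cornerRadius = self.profileImageView.frame.width / 2.0
self.profileImageView.clipsToBounds = true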
Add a self.view.layoutIfNeeded() line before you set the corner radius.
self.view.layoutIfNeeded()
self.profileImageView.layer.cornerRadius = self.profileImageView.frame.width/2.0
self.profileImageView.clipsToBounds = true

Extracting a laser line in an image (using OpenCV)

I have a picture of a laser line and I would like to extract that line out of the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points which don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the single points and also to extract the lines)?
This was another approach to detect the line:
First I blurred that "black-white" line with a kernel, then I thinned (skeletonized) the blurred line down to a thin line, and then I applied an OpenCV function to detect the line; the result is in the image below:
NEW:
Now I have another, harder situation.
I have to extract a green laser line.
The problem here is that the colour range of the laser line is wider and changing.
On some parts of the laser line the pixels just have a high green component, while on other parts the pixels have a high blue component as well.
Getting the highest value in every row will always output a value, even when that value isn't actually bright enough to belong to the laser. Consider using a threshold as well, so that you can discard rows whose maximum isn't high enough.
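A minimal sketch of that per-row idea with a brightness cutoff (the file name and the threshold of 150 are just placeholders):
import cv2
import numpy as np
img = cv2.imread('image.png')
red = img[:, :, 2]                              # OpenCV loads images as BGR
cols = red.argmax(axis=1)                       # brightest column in each row
rows = np.arange(red.shape[0])
keep = red[rows, cols] > 150                    # drop rows whose peak is too dim
line_points = np.column_stack((cols[keep], rows[keep]))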
However, that's not a very efficient way to do this at all. A much better and easier solution would be to use the OpenCV function inRange(); define a lower and upper bound for the red color in all three channels, and this will return a binary image with white pixels where the image intensity is within that BGR range.
This is in Python, but it does the job and should make it easy to see how to use the function:
import cv2
import numpy as np
img = cv2.imread('image.png')
lowerb = np.array([0, 0, 120])
upperb = np.array([100, 100, 255])
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
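For instance, one quick way to turn the thresholded pixels into a single line is to fit a straight line through them with cv2.fitLine (a rough sketch that continues from the code above and assumes the laser really is roughly straight):
# Collect the coordinates of the white pixels and fit a line through them.
ys, xs = np.nonzero(red_line)
pts = np.column_stack((xs, ys)).astype(np.float32)
vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
# (vx, vy) is the unit direction vector and (x0, y0) a point on the line.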
I'm really sorry for the short answer without any code, but I suggest you take contours and process them.
I don't know exactly what you need, so here are two approaches for you:
just collect as many contours as possible on a single line (use their centers and try to find the straight line with the smallest mean error)
as in the first way, but try to heuristically combine the separated lines... it's much harder, but this may give you almost the full laser line from the image.
--
Some example code for your picture:
import cv2
import numpy as np
import math
img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
# filtering red area of hue
redHueArea = 15
redRange = ((hsv[:, :, 0] + 360 + redHueArea) % 360)
hsv[np.where((2 * redHueArea) > redRange)] = [0, 0, 0]
# filtering by saturation
hsv[np.where(hsv[:, :, 1] < 95)] = [0, 0, 0]
# convert to rgb
rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
# select only red grayscaled channel with low threshold
gray = cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contours processing
(_, contours, _) = cv2.findContours(gray.copy(), cv2.RETR_LIST, 1)
for c in contours:
    area = cv2.contourArea(c)
    if area < 8: continue
    epsilon = 0.1 * cv2.arcLength(c, True)  # tricky smoothing to a single line
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly but, as I already said, you will need to do much more work with contours.

How to remove black part from the image?

I have stitched two images together using OpenCV functions and C++. Now I am facing a problem that the final image contains a large black part.
The final image should be a rectangle containing the effective part.
My image is the following:
How can I remove the black section?
mevatron's answer is one way, where the amount of black region is minimised while retaining the full image.
Another option is removing the complete black region, where you also lose some part of the image, but the result will be a neat-looking rectangular image. Below is the Python code.
Here, you find the three main corners of the image, as below:
I have marked those values: (1,x2), (x1,1), (x3,y3). It is based on the assumption that your image starts from (1,1).
Code :
The first steps are the same as mevatron's: blur the image to remove noise, threshold the image, then find contours.
import cv2
import numpy as np
img = cv2.imread('office.jpg')
img = cv2.resize(img,(800,400))
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray,3)
ret,thresh = cv2.threshold(gray,1,255,0)
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)
Now find the biggest contour, which is your image. This is to avoid noise in case there is any (most probably there won't be). Or you can use mevatron's method.
max_area = -1
best_cnt = None
for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt
Now approximate the contour to remove unnecessary points from the contour found, while preserving all the corner values.
approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)
Now we find the corners.
First, we find (x3,y3). It is the farthest point, so x3*y3 will be very large. We therefore take the product of the coordinates of every point and select the point with the maximum product.
far = approx[np.product(approx,2).argmax()][0]
Next, (1,x2). It is the point where the first element is 1 and the second element is maximum.
ymax = approx[approx[:,:,0]==1].max()
Next, (x1,1). It is the point where the second element is 1 and the first element is maximum.
xmax = approx[approx[:,:,1]==1].max()
Now we take the minimum of (far.x, xmax) and of (far.y, ymax):
x = min(far[0],xmax)
y = min(far[1],ymax)
If you draw a rectangle with corners (1,1) and (x,y), you get the result below:
So crop the image to the correct rectangular area.
img2 = img[:y,:x].copy()
Below is the result:
See, the problem is that you lose some parts of the stitched image.
You can do this with threshold, findContours, and boundingRect.
So, here is a quick script doing this with the Python interface.
stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY);
# findContours destroys input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
# sort contours by largest first (if there are more than one)
contours = sorted(contours, key=lambda contour:len(contour), reverse=True)
roi = cv2.boundingRect(contours[0])
# use the roi (x, y, width, height) to crop the original 'stitched' image
cropped = stitched[roi[1]:roi[1] + roi[3], roi[0]:roi[0] + roi[2]]
Ends up looking like this:
NOTE : Sorting may not be necessary with raw imagery, but using the compressed image caused some compression artifacts to show up when using a low threshold, so that is why I post-processed with sorting.
Hope that helps!
You can use active contours (balloons/snakes) for selecting the black region accurately. A demonstration can be found here. Active contours are available in OpenCV, check cvSnakeImage.

SetViewBox moving the paper

I am using the setViewBox() function in Raphael 2. The width and height are multiplied by a value like 1.2, 1.3, etc. This changes the magnification/zooming properly, but the x and y, which I have given as 0,0, make the paper display its contents at some offset. If I modify the x and y to some positive values after rendering (using Firebug!), the top left of the paper moves back up and to the left, into its correct position. I want to know how these values should be calculated; I have no idea how the x,y affect the viewbox. If anybody can give me any pointers it would be a real help.
I have tried giving the difference between the width/height divided by 2. I must also mention that I am not rendering an image but various Raphael shapes, e.g. rects, paths, text etc., in my paper.
Looking forward to some help!
Kavita
This is an example showing how to calculate the setViewBox values. I included jQuery to get my SVG container's X and Y ($("#"+map_name).offset().left and $("#"+map_name).offset().top), and after that I calculated how much zoom I need:
var original_width = 777;
var original_height = 667;
var zoom_width = map_width*100/original_width/100;
var zoom_height = map_height*100/original_height/100;
if (zoom_width <= zoom_height)
    zoom = zoom_width;
else
    zoom = zoom_height;
rsr.setViewBox($("#"+map_name).offset().left, $("#"+map_name).offset().top, (map_width/zoom), (map_height/zoom));
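For reference, the relationship itself is simple: setViewBox(x, y, w, h) maps the user-space rectangle with top-left corner (x, y) and size w x h onto the whole paper. A minimal sketch of zooming in by a factor while keeping the top-left corner fixed at the origin (the paper variable and the zoom value are placeholders):
var zoom = 1.2;
// Showing a smaller user-space rectangle on the same paper makes everything
// appear larger; keeping x = y = 0 keeps the top-left corner in place.
paper.setViewBox(0, 0, paper.width / zoom, paper.height / zoom, true);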
Did you put the center of your scaling at 0,0, like:
element.scale(1.2, 1.2, 0, 0);
This can scale your element without moving the coordinates of the top left corner.