Draw a point on a raster image from CLI?

How do I draw a point on an image with ImageMagick (or some other Linux CLI tool)? I tried:
convert tension_00.png -pointsize 100 -fill black -draw 'point 10,10' tension_001.png
It doesn't give an error, but I can't find the point in the processed image.

Oh, I solved it with pygame (that is, from a Python script):
import pygame
img = pygame.image.load('tension_00.png')
pygame.draw.circle(img, (1, 1, 1), (600, 600), 50)  # surface, color, center, radius; width=0 draws a filled circle
pygame.image.save(img, 'test.png')
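For what it's worth, the original command probably did draw something: ImageMagick's point primitive paints exactly one pixel (-pointsize only affects text), so a single black pixel at 10,10 is easy to miss. A sketch of the same fix with Wand, ImageMagick's Python binding, drawing a filled circle so the dot is visible (file names taken from the question):
from wand.color import Color
from wand.drawing import Drawing
from wand.image import Image
with Image(filename='tension_00.png') as img:
    with Drawing() as draw:
        draw.fill_color = Color('black')
        draw.circle((600, 600), (600, 650))  # center and a perimeter point, i.e. radius 50
        draw(img)
    img.save(filename='tension_001.png')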

Related

How can I use the Viola-Jones algorithm to detect the face as a region of interest and crop it to the rectangle box?

I want to detect the face in the video frame and remove the other elements such as the background, and just focus on the facial region. For this I need to use the Viola-Jones algorithm. Can anyone give me a hint or a suitable answer for this?
import cv2
import sys
imagep = '6.jpg'  # sys.argv[1]
# load the pretrained Haar cascade for frontal faces
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
i = cv2.imread(imagep)
gray = cv2.cvtColor(i, cv2.COLOR_BGR2GRAY)  # detection runs on a grayscale image
f = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30), flags=cv2.CASCADE_SCALE_IMAGE)
print("Found {0} faces!".format(len(f)))
for (x, y, w, h) in f:
    cv2.rectangle(i, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imshow("Faces found", i)
cv2.waitKey(0)
Once you have the upper-left and bottom-right coordinates of the rectangle in which the face is contained, you can just crop the original image based on those coordinates. Suppose the initial image is stored in the frame variable; then:
faces = face_cascade.detectMultiScale(frame, scaleFactor=1.8, minNeighbors=5)  # detects face rectangles as (x, y, w, h)
x, y, w, h = faces[0]  # first detected face
resultant_image = frame[y:y + h, x:x + w]  # the cropped face region
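Putting the two snippets together, a minimal end-to-end sketch (assuming the same 6.jpg input; the face.png output name is made up):
import cv2
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
frame = cv2.imread('6.jpg')
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
if len(faces) > 0:
    x, y, w, h = faces[0]
    cv2.imwrite('face.png', frame[y:y + h, x:x + w])  # save the cropped face region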

How to perform ImageMagick gradient call with python wand

I am trying to convert this ImageMagick command to Python Wand code, but I don't see a means to create a gradient image with Wand.
convert -size 800x800 gradient:"rgba(0,0,0,0.12)-rgba(0,0,0,1)" gradient_overlay.png
convert gradient_overlay.png background.png -compose Overlay -composite -depth 8 background_gradient.png
Does anyone know how I could achieve this with Wand?
You will need to allocate a Wand image instance, set the canvas size, then read the gradient pseudo-image format.
from wand.image import Image
from wand.api import library
with Image() as canvas:
    library.MagickSetSize(canvas.wand, 800, 800)  # size must be set before reading the pseudo-image
    canvas.read(filename="gradient:rgba(0,0,0,0.12)-rgba(0,0,0,1)")
    canvas.save(filename="gradient_overlay.png")
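For the second convert command (the Overlay composite), a sketch using the same composite_channel pattern as further down, assuming background.png sits next to the generated gradient:
from wand.image import Image
with Image(filename='gradient_overlay.png') as base:
    with Image(filename='background.png') as overlay:
        # equivalent of: convert gradient_overlay.png background.png -compose Overlay -composite -depth 8 ...
        base.composite_channel('default_channels', overlay, 'overlay', 0, 0)
        base.depth = 8
        base.save(filename='background_gradient.png')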
ImageMagick Equivalent For Use With Wand
I have not used Wand, but I can show you how to do it with reference to ImageMagick. You can create a new transparent image, then use the fx operator to modify the alpha channel into a gradient.
The equivalent Wand references are:
Create a new transparent image (the default) - see http://docs.wand-py.org/en/0.4.1/guide/read.html?highlight=new%20image#open-empty-image
Use fx to convert the alpha channel to a gradient - see http://docs.wand-py.org/en/0.4.1/wand/image.html
convert -size 800x800 xc:transparent -channel a -fx "(j*(1-0.12)/(h-1)+0.12)" +channel alpha.png
Here is how I got the fx formula:
You want a 12% gray at the top (0.12) and 100% gray/white at the bottom (1.0). So we take the formula:
c = a*j + b
At j=0 (the top), you need 0.12, so
0.12 = a*0 + b --> b = 0.12
At j=(h-1) (the bottom), you want 1, so
1 = a*(h-1) + 0.12 --> a = (1-0.12)/(h-1)
So the equation is:
c = j*(1-0.12)/(h-1) + 0.12
h is the height of the image (800).
Here is how I solved it based on fmw42's feedback:
from wand.color import Color
from wand.image import Image
with Color('black') as blackColor:
    with Image(width=800, height=800, background=blackColor) as black:
        blackPath = existing[:existing.rfind('/') + 1] + 'black.png'  # 'existing' is the poster's own path variable
        with Color('white') as whiteColor:
            with Image(width=800, height=800, background=whiteColor) as alpha:
                fxFilter = "(j*(1-0.12)/(h-1)+0.12)"
                with alpha.fx(fxFilter) as filteredImage:
                    # copy the gradient into the black image's alpha channel
                    black.composite_channel('default_channels', filteredImage, 'copy_opacity', 0, 0)
                    black.save(filename=blackPath)

Extracting a laser line in an image (using OpenCV)

I have a picture of a laser line and I would like to extract that line out of the image.
As the laser line is red, I take the red channel of the image and then search for the highest intensity in every row:
The problem now is that there are also some points that don't belong to the laser line (if you zoom into the second picture, you can see these points).
Does anyone have an idea for the next steps (to remove the single points and also to extract the lines)?
This was another approach to detect the line:
First I blurred that "black-white" line with a kernel, then I thinned (skeletonized) the blurred line to a thin line, then I applied an OpenCV function to detect the line. The result is in the image below:
NEW:
Now I have another, harder situation.
I have to extract a green laser light.
The problem here is that the colour range of the laser line is wider and changing.
In some parts of the laser line the pixels just have a high green component, while in other parts the pixels have a high blue component as well.
Getting the highest value in every row will always output a value, even when no laser pixel is present. Consider using a threshold too, so that you can discard maxima that aren't bright enough.
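For completeness, a minimal sketch of that row-wise approach (the threshold value is an assumption; tune it for your image):
import cv2
import numpy as np
img = cv2.imread('image.png')
red = img[:, :, 2]  # OpenCV loads BGR, so index 2 is the red channel
threshold = 200  # hypothetical minimum intensity for a laser pixel
rows = np.arange(red.shape[0])
cols = np.argmax(red, axis=1)  # brightest column in each row
valid = red[rows, cols] >= threshold  # keep only rows whose maximum is bright enough
mask = np.zeros(red.shape, dtype=np.uint8)
mask[rows[valid], cols[valid]] = 255
cv2.imshow('laser line', mask)
cv2.waitKey(0)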
However, that row-wise approach is not very efficient. A much better and easier solution is the OpenCV function inRange(): define a lower and upper bound for the red color in all three channels, and it returns a binary image with white pixels wherever the image intensity falls within that BGR range.
This is in Python, but it does the job; it should be easy to see how to use the function:
import cv2
import numpy as np
img = cv2.imread('image.png')
lowerb = np.array([0, 0, 120])  # lower BGR bound: only strongly red pixels pass
upperb = np.array([100, 100, 255])  # upper BGR bound
red_line = cv2.inRange(img, lowerb, upperb)
cv2.imshow('red', red_line)
cv2.waitKey(0)
This produces the output:
This could be further processed by finding contours or other methods to turn the points into a nice curve.
I'm really sorry for the short answer without any code, but I suggest you take contours and process them.
I don't know exactly what you need, so here are two approaches for you:
1. Just collect as many contours as possible on a single line (use their centers and try to find the straight line with the smallest mean error); a sketch follows this list.
2. As in the first approach, but try to heuristically combine the separated lines. It's much harder, but this may give you almost the full laser line from the image.
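A minimal sketch of the first approach, reusing the inRange mask from the answer above and fitting a least-squares line through the contour centers (bounds and names are illustrative):
import cv2
import numpy as np
img = cv2.imread('image.png')
mask = cv2.inRange(img, np.array([0, 0, 120]), np.array([100, 100, 255]))
contours = cv2.findContours(mask, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] works on OpenCV 3.x and 4.x
centers = []
for c in contours:
    M = cv2.moments(c)
    if M['m00'] == 0:
        continue
    centers.append((M['m10'] / M['m00'], M['m01'] / M['m00']))  # contour centroid
pts = np.array(centers, dtype=np.float32)
vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01)  # direction (vx, vy) and a point (x0, y0) on the line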
--
Some example code for your picture:
import cv2
import numpy as np
img = cv2.imread('image.png')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# keep only hues close to red (OpenCV stores hue in 0-179, and red wraps around 0/180)
redHueArea = 15
redRange = (hsv[:, :, 0].astype(int) + redHueArea) % 180
hsv[redRange >= 2 * redHueArea] = [0, 0, 0]
# filter by saturation
hsv[hsv[:, :, 1] < 95] = [0, 0, 0]
# convert back to BGR
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
# grayscale the remaining red pixels and binarize with a low threshold
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
gray = cv2.threshold(gray, 15, 255, cv2.THRESH_BINARY)[1]
# contours processing ([-2] works on both OpenCV 3.x and 4.x return signatures)
contours = cv2.findContours(gray.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)[-2]
for c in contours:
    area = cv2.contourArea(c)
    if area < 8:
        continue
    epsilon = 0.1 * cv2.arcLength(c, True)  # tricky smoothing to a single line
    approx = cv2.approxPolyDP(c, epsilon, True)
    cv2.drawContours(img, [approx], -1, [255, 255, 255], -1)
cv2.imshow('result', img)
cv2.waitKey(0)
In your case it works perfectly, but, as I already said, you will need to do much more work with the contours.

Change origin of image coordinate system to bottom left instead of default top left

Is there a simple way of changing the origin of the image coordinate system in OpenCV to the bottom left, using numpy for example? I am using OpenCV 2.4.12 and Python 2.7.
Related: Numpy flipped coordinate system, but that talks about display only. I want something I can use consistently in my algorithm.
Update:
import cv2
import numpy as np
import matplotlib.pyplot as plt
def imread(*args, **kwargs):
    img = plt.imread(*args, **kwargs)
    img = np.flipud(img)  # flip vertically so row 0 becomes the bottom of the picture
    return img
# read reference image using cv2.imread
imref = cv2.imread('D:\\users\\gayathri\\all\\new\\CoilA\\Resized_Results\\coilA_1.png', -1)
cv2.circle(imref, (0, 0), 30, (0, 0, 255), 2, 8, 0)
cv2.imshow('imref', imref)
# read the same image using the imread function above
im = imread('D:\\users\\gayathri\\all\\new\\CoilA\\Resized_Results\\coilA_1.png')
img = im.copy()
cv2.circle(img, (0, 0), 30, (0, 0, 255), 2, 8, 0)
cv2.imshow('img', img)
Image read using cv2.imread:
Image flipped using imread function:
As seen, the circle is drawn at the origin in the upper-left corner in both the original and the flipped image. But the image itself looks flipped, which I do not want.
Reversing the pixels along the height (the rows) gives the result below.
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img = cv2.imread('./imagesStackoverflow/flip_body.png') # read as color image
flip = img[::-1,:,:] # reverse the height axis in (height, width, channel)
plt.imshow(img[:,:,::-1]), plt.title('original'), plt.show()
plt.imshow(flip[:,:,::-1]), plt.title('flip vertical'), plt.show()
plt.imshow(img[:,:,::-1]), plt.title('original with inverted y-axis'), plt.gca().invert_yaxis(), plt.show()
plt.imshow(flip[:,:,::-1]), plt.title('flip vertical with inverted y-axis'), plt.gca().invert_yaxis(), plt.show()
Output images:
Do the examples above include the one you intended?
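If the goal is consistent coordinates rather than flipped pixel data, another option is to keep the image as-is and convert points at the boundary. A minimal helper (hypothetical name), assuming integer pixel coordinates:
def to_bottom_left(point, height):
    # convert an (x, y) point between top-left and bottom-left origins;
    # the mapping is its own inverse, so the same function converts back
    x, y = point
    return (x, height - 1 - y)
# e.g. mark the bottom-left corner of a height-800 image:
# cv2.circle(img, to_bottom_left((0, 0), 800), 30, (0, 0, 255), 2)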

OpenCV - Extract letters from string using python

I have an image
from which I want to extract each and every character individually.
I want something like THIS OUTPUT, and so on.
What would be the appropriate approach to do this using OpenCV and Python?
A short addition to Amitay's awesome answer. You should negate the image using
cv2.THRESH_BINARY_INV
to capture black letters on white paper.
Another idea could be the MSER blob detector, like this:
import cv2
img = cv2.imread('path to image')
(h, w) = img.shape[:2]
image_size = h * w
mser = cv2.MSER_create()
mser.setMaxArea(int(image_size / 2))  # MSER expects integer areas
mser.setMinArea(10)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # converting to grayscale
_, bw = cv2.threshold(gray, 0.0, 255.0, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
regions, rects = mser.detectRegions(bw)
# with the rects you can e.g. crop the letters
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), color=(255, 0, 255), thickness=1)
This also gives you a detection for every letter.
You can do the following (OpenCV 3.0 and above):
1. Run Otsu thresholding on the image (http://docs.opencv.org/3.2.0/d7/d4d/tutorial_py_thresholding.html).
2. Run connected component labeling with stats on the thresholded image (How to use openCV's connected components with stats in python?).
3. For each connected component, take the bounding box using the stats you got from step 2, which give the following information for each component: cv2.CC_STAT_LEFT, cv2.CC_STAT_TOP, cv2.CC_STAT_WIDTH, cv2.CC_STAT_HEIGHT.
4. Using the bounding box, crop the component from the original image.
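A minimal sketch of those four steps; it also applies the THRESH_BINARY_INV negation suggested above, and the input and output file names are made up:
import cv2
img = cv2.imread('letters.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# step 1: Otsu threshold, inverted so dark letters become white foreground
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)
# step 2: connected component labeling with stats
n, labels, stats, centroids = cv2.connectedComponentsWithStats(bw)
# steps 3 and 4: bounding box of each component (label 0 is the background), then crop
for i in range(1, n):
    x = stats[i, cv2.CC_STAT_LEFT]
    y = stats[i, cv2.CC_STAT_TOP]
    w = stats[i, cv2.CC_STAT_WIDTH]
    h = stats[i, cv2.CC_STAT_HEIGHT]
    cv2.imwrite('letter_{}.png'.format(i), img[y:y + h, x:x + w])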