PIL - how to insert an index, or subscript, into text? - python-2.7

Like this:
Calculating the coordinates doesn't look so good; maybe there is a better way?
This code works fine, but it is cumbersome to always calculate where to place the index for each letter.
from PIL import Image, ImageDraw, ImageFont

image = Image.new('I', (300, 100), "white").convert('RGBA')
font = ImageFont.truetype(font=r"C:\Windows\Fonts\Arial.ttf", size=39)
draw = ImageDraw.Draw(image, 'RGBA')
draw.text((10, 10), "P", fill="black", font=font, align="center")
# Smaller font, hand-placed below the baseline, for the subscript
font = ImageFont.truetype(font=r"C:\Windows\Fonts\Arial.ttf", size=20)
draw.text((25, 35), "2", fill="black", font=font, align="center")
image.save(output_folder + 'test.png')  # output_folder is defined elsewhere
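One way to reduce the hand-tuning is to derive the horizontal placement from font metrics and only pick the subscript's vertical drop by eye. A minimal sketch, assuming a recent Pillow (textlength needs Pillow 8.0 or later; the drop of 15 pixels is just a guess to play with):
from PIL import Image, ImageDraw, ImageFont
image = Image.new('RGBA', (300, 100), 'white')
draw = ImageDraw.Draw(image)
big = ImageFont.truetype(r"C:\Windows\Fonts\Arial.ttf", size=39)
small = ImageFont.truetype(r"C:\Windows\Fonts\Arial.ttf", size=20)
x, y = 10, 10
for letter, index in [('P', '2'), ('O', '5')]:
    draw.text((x, y), letter, fill='black', font=big)
    x += draw.textlength(letter, font=big)                    # advance past the letter
    draw.text((x, y + 15), index, fill='black', font=small)   # drop chosen by eye
    x += draw.textlength(index, font=small)
image.save('test.png')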

One possibility for you might be to use ImageMagick, which understands the Pango Markup Language - it looks kind of like HTML.
So, at the command-line you could run this:
convert -background white pango:'<span size="49152">Formula: <b>2P<sub><small><small>2</small></small></sub>O<sub><small><small>5</small></small></sub></b></span>' formula.png
which produces this PNG file:
Change to -background none to write on a piece of transparent canvas if you want to preserve whatever is underneath the text in your original image.
You can also put all the markup in a separate text file, called say "pango.txt" like this:
<span size="49152">Formula: <b>2P<sub><small><small>2</small></small></sub>O<sub><small><small>5</small></small></sub></b></span>
and pass that into ImageMagick like this:
convert pango:@pango.txt result.png
(the @ tells ImageMagick to read the markup from the named file)
You could shell out and do this from Python using subprocess.call(). Then you can easily load the resultant image and composite/paste it in where you want it - that would take about 3 lines of Python that you could put in a function.
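A minimal sketch of that, assuming ImageMagick's convert is on the PATH (the file names and paste position are placeholders):
import subprocess
from PIL import Image
subprocess.call(['convert', '-background', 'none', 'pango:@pango.txt', 'formula.png'])
formula = Image.open('formula.png').convert('RGBA')
base = Image.open('original.png').convert('RGBA')
base.paste(formula, (10, 10), formula)   # third argument uses the alpha channel as the mask
base.save('result.png')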
Here is a further example of an image generated with Pango by Anthony Thyssen so you can see some of the possibilities:
There is plenty of further information on Pango by Anthony here.
Note that there are also Python bindings for ImageMagick; I am not very familiar with them, but they may be cleaner than shelling out.
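One such binding is Wand. As an unverified sketch - the assumption being that Wand forwards the pango: pseudo-format to ImageMagick just as convert does:
from wand.image import Image
with Image(filename='pango:@pango.txt') as img:   # assumption: pango: coder accepted in filename
    img.save(filename='formula.png')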
Keywords: Pango, PIL, Pillow, Python, markup, subscript, superscript, formula, chemical formulae, ImageMagick, image, image processing, SGML, HTML.

You can also do this sort of thing using Mathtext in Matplotlib:
#!/usr/bin/env python3
import matplotlib.pyplot as plt
plt.axes([0.025, 0.025, 0.95, 0.95])
# A formula with subscripts and superscripts
eq = r"$ 2P_2 O_5 + H^{2j}$"
size = 50
x,y = 0.5, 0.5
alpha = 1
params = {'mathtext.default': 'regular' }
plt.rcParams.update(params)
plt.text(x, y, eq, ha='center', va='center', color="#11557c", alpha=alpha,
         transform=plt.gca().transAxes, fontsize=size, clip_on=True)
# Suppress ticks
plt.xticks(())
plt.yticks(())
# Save on transparent background
plt.savefig('result.png', transparent=True)
You can also save the output in a memory buffer (without going to disk) and then use that in your PIL-based image processing.
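A minimal sketch of that, replacing the savefig call above (io.BytesIO is from the standard library):
import io
from PIL import Image
buf = io.BytesIO()
plt.savefig(buf, format='png', transparent=True)   # render into memory instead of to disk
buf.seek(0)
rendered = Image.open(buf).convert('RGBA')         # now usable in the PIL pipeline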
Note that I have explicitly named and assigned all the parameters (x, y, size and alpha) so you can play with them; that makes the code look longer and more complicated than it actually is.
Keywords: Python, PIL, Pillow, maths, mathematical symbols, formula with superscripts, subscripts, square roots, fractions and integrals.

Related

I have a list of 1000 words and I need to make sure that each word is written on a separate picture. How can I automate this?

I attached a photo example:
https://i.stack.imgur.com/f3hvY.jpg
I need to do the same with 1000 words
You can automate this with Python using the PIL library (Pillow).
from PIL import Image, ImageDraw, ImageFont
words = ['Hello World', 'Good Morning', 'Coffee']
for number, word in enumerate(words):
    img = Image.new('RGB', (1000, 1000), color=(0, 0, 0))  # (0, 0, 0) means black
    font = ImageFont.truetype("arial.ttf", 50)
    d = ImageDraw.Draw(img)
    d.text((400, 400), word, font=font, fill=(255, 255, 255))  # (255, 255, 255) is white
    img.save(f'pil_text{number}.jpg')
In the above code you can change the background colour, font size, font type and output file name. You can fill the 'words' list manually or load it from a file in Python; the code will change slightly depending on what kind of file you are loading.
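For instance, a minimal sketch for a plain-text file with one word or phrase per line ('words.txt' is a placeholder name):
with open('words.txt', encoding='utf-8') as f:
    words = [line.strip() for line in f if line.strip()]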

Python Matplotlib creating a custom colour scale

I have created a map of precipitation levels in a region based on precipitation data from NetCDF files. I would like to add a custom scale such that if precipitation is less than 800mm it would be one colour, 800-1000mm another, etc. Similar to the map found here: http://www.metmalawi.com/climate/climate.php
At the moment I am using a gradient scale but it isn't showing the detail I need. This is the code for the plot at the moment (where 'Average' is my data that I have already formatted).
#imports needed by this snippet
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as mpl_cm
import cartopy.crs
import cartopy.feature
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import iris.plot as iplt
#load color palette
colourA = mpl_cm.get_cmap('BuPu')
#plot map with physical features
ax = plt.axes(projection=cartopy.crs.PlateCarree())
ax.add_feature(cartopy.feature.COASTLINE)
ax.add_feature(cartopy.feature.BORDERS)
ax.add_feature(cartopy.feature.LAKES, alpha=0.5)
ax.add_feature(cartopy.feature.RIVERS)
#set map boundary
ax.set_extent([32.5, 36., -9, -17])
#set axis tick marks
ax.set_xticks([33, 34, 35])
ax.set_yticks([-10, -12, -14, -16])
lon_formatter = LongitudeFormatter(zero_direction_label=True)
lat_formatter = LatitudeFormatter()
ax.xaxis.set_major_formatter(lon_formatter)
ax.yaxis.set_major_formatter(lat_formatter)
#plot data and set colour range
plot = iplt.contourf(Average, cmap=colourA, levels=np.arange(0,15500,500), extend='both')
#add colour bar index and a label
plt.colorbar(plot, label='mm per year')
#give map a title
plt.title('Pr 1990-2008 - Average_ERAINT ', fontsize=10)
#save the image of the graph and include full legend
plt.savefig('ERAINT_Average_Pr_MAP_Annual', bbox_inches='tight')
plt.show()
Anyone know how I can do this?
Thank you!
This is a matplotlib question disguised as an Iris question, since the issue appeared via Iris plotting routines, but to answer it we need only a couple of matplotlib keyword arguments. As such, I'm basing this answer on this matplotlib gallery example. The two key arguments are levels (containing the boundary values between contour bands) and colors (specifying the colour to shade each band). It's best if there is one more level than there are colours, since each colour fills the band between two consecutive levels.
To demonstrate this, I put the following example together. Given that no sample data was provided, I made my own trigonometric data. The levels are based on the values of that data, so they do not reflect the levels required in the question, but they could be changed to the original levels. The colours used are the hex values of the colours in the map image linked in the question.
The code:
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(-25, 25)
y = np.arange(-20, 20)
x2d, y2d = np.meshgrid(x, y)
vals = (3 * np.cos(x2d)) + (2 * np.sin(y2d))
colours = ['#bf8046', '#df9f24', '#e0de30', '#c1de2d', '#1ebf82',
           '#23de27', '#1dbe20', '#11807f', '#24607f', '#22427e']
levels = range(-5, 6)
plt.contourf(vals, levels=levels, colors=colours)
plt.colorbar()
plt.show()
The produced image:
Colours could also be selected from a colormap (one way of doing this is shown in this StackOverflow answer); there are other ways too, including in the matplotlib gallery example linked above. Given, though, that the sample map linked in the question used specific colours, I chose to use those colours directly.
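For completeness, a minimal sketch of sampling N evenly spaced colours from a named colormap (here 'BuPu' from the question; any registered colormap name works):
import matplotlib.pyplot as plt
cmap = plt.get_cmap('BuPu')
n_colours = 10
colours = [cmap(i / (n_colours - 1)) for i in range(n_colours)]   # a list of RGBA tuples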

Find the width of an ink stroke in an image using OpenCV & C++

I have the following sample of handwriting taken with three different writing instruments:
Looking at the writing, I can tell that there is a distinct difference between the first two and the last one. My goal is to determine an approximation of the stroke thickness for each letter, allowing me to group them based on being thin or thick.
So far, I have tried looking into the stroke width transform, but I have struggled to translate it to my example.
I am able to preprocess the image such that I am left with just the contours of the text in question. For example, here is "thick" from the last line:
I suggest detecting contours with cv::findContours, as you are doing, and then comparing the bounding-rectangle area with the contour area. The thicker the writing, the greater the coefficient (contourArea / boundingRectArea) will be.
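A minimal sketch of that idea in Python (the question mentions C++, but the OpenCV calls are the same; 'strokes.png' stands in for your preprocessed binary image):
import cv2
binary = cv2.imread('strokes.png', cv2.IMREAD_GRAYSCALE)
# OpenCV 4.x returns (contours, hierarchy); OpenCV 3.x prepends the image
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    if w * h > 0:
        ratio = cv2.contourArea(c) / float(w * h)   # thicker strokes give larger ratios
        print(ratio)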
This approach will help you; it calculates the stroke width.
import cv2
import numpy as np
from skimage.feature import peak_local_max
from skimage import img_as_float

def adaptive_thresholding(image):
    output_image = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 21, 2)
    return output_image

def stroke_width(image):
    # Distance transform: each ink pixel gets its distance to the nearest
    # background pixel, so local maxima lie on the stroke centreline
    dist = cv2.distanceTransform(cv2.subtract(255, image), cv2.DIST_L2, 5)
    im = img_as_float(dist)
    coordinates = peak_local_max(im, min_distance=15)
    pixel_strength = []
    for element in coordinates:
        x = element[0]
        y = element[1]
        pixel_strength.append(np.asarray(dist)[x, y])
    mean_pixel_strength = np.asarray(pixel_strength).mean()
    return mean_pixel_strength

image = cv2.imread('Small3.JPG', 0)
process_image = adaptive_thresholding(image)
stroke_width(process_image)
A Python implementation of this might go something like the following, using the Stroke Width Transform implementation from the SWTloc library.
Full Disclosure: I am the author of this library.
EDIT : Post v2.0.0
Transforming The Image
import numpy as np
import swtloc as swt

imgpath = 'images/path_to_image.jpeg'
swtl = swt.SWTLocalizer(image_paths=imgpath)
swtImgObj = swtl.swtimages[0]
# Perform SWT Transformation with numba engine
swt_mat = swtImgObj.transformImage(auto_canny_sigma=1.0, gaussian_blurr=False,
                                   minimum_stroke_width=3, maximum_stroke_width=50,
                                   maximum_angle_deviation=np.pi/3)
Localize Letters
localized_letters = swtImgObj.localizeLetters()
Plot a Histogram of Each Letter's Stroke Widths
import seaborn as sns
import matplotlib.pyplot as plt

all_sws = []
for letter_label, letter in localized_letters.items():
    all_sws.append(letter.stroke_widths_mean)
sns.displot(all_sws, bins=31)
From the distribution plot, it can be inferred that there might be three font sizes in the image, with stroke widths clustered around [3, 15, 27].

Rotate and scale a complete .SVG document using Python

I have an SVG drawing (of a building map) and I want to rotate the complete document 90 degrees clockwise. At the moment the drawing's orientation is portrait; the idea is to have a landscape orientation.
Besides this, I would like to scale the complete document (so including all elements).
So far I have not been able to find a way of doing this on the web, which is why I am asking here. My questions are:
Is it possible?
If yes, how can this be done?
I managed to rotate an SVG figure with the svgutils module. It can be installed with pip.
>>> import svgutils
>>> svg = svgutils.transform.fromfile('camera.svg')
>>> originalSVG = svgutils.compose.SVG('camera.svg')
>>> originalSVG.rotate(90)
>>> originalSVG.move(svg.height, 10)
<svgutils.compose.SVG object at 0x7f11dc78fb10>
>>> figure = svgutils.compose.Figure(svg.height, svg.width, originalSVG)
>>> figure.save('svgNew.svg')
Note that the width and height attributes must be specified on the svg tag in the original SVG file.
Reference I used
Actually, this method doesn't do anything to the elements except wrap them all in a g tag with a transform attribute. But it seems that with this module you can access each and every element in the SVG tree and do whatever you want with them.
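For example, a minimal sketch of transforming a single element (assuming the target element has id="room1" in the file; find_id, rotate and save come from svgutils.transform, though I have not verified this against a real building map):
>>> import svgutils.transform as sg
>>> fig = sg.fromfile('camera.svg')
>>> element = fig.find_id('room1')
>>> element.rotate(45)
>>> fig.save('svgEdited.svg')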
Scaling an SVG is also easy:
>>> originalSVG.scale(2)
<svgutils.compose.SVG object at 0x7f11dc78fb10>
>>> figure = svgutils.compose.Figure(float(svg.height) * 2, float(svg.width) * 2, originalSVG)
>>> figure.save('svgNew.svg')

Doing OCR to identify text written on trucks/cars or other vehicles

I am new to the world of Computer Vision.
I am trying to use Tesseract to detect numbers written on the side of trucks.
So for this example, I would like to see CMA CGM as the output.
I fed this image to Tesseract via the command line:
tesseract image.JPG out -psm 6
but it yielded a blank file.
Then I read the documentation of tesserocr (a Python wrapper for Tesseract) and tried the following code:
from PIL import Image
from tesserocr import PyTessBaseAPI, RIL

image = Image.open('image.JPG')  # assumed: the truck photo fed to Tesseract above
with PyTessBaseAPI() as api:
    api.SetImage(image)
    boxes = api.GetComponentImages(RIL.TEXTLINE, True)
    print 'Found {} textline image components.'.format(len(boxes))
    for i, (im, box, _, _) in enumerate(boxes):
        # im is a PIL image object
        # box is a dict with x, y, w and h keys
        api.SetRectangle(box['x'], box['y'], box['w'], box['h'])
        ocrResult = api.GetUTF8Text()
        conf = api.MeanTextConf()
        print (u"Box[{0}]: x={x}, y={y}, w={w}, h={h}, "
               "confidence: {1}, text: {2}").format(i, conf, ocrResult, **box)
and again it was not able to read any characters in the image.
My question is: how should I go about solving this problem? (I am not looking for ready-made code, but for an approach to the problem.)
Would I need to train Tesseract with sample images, or can I just write code using existing libraries to somehow detect the co-ordinates of the truck and do OCR only within the boundaries of the truck?
Tesseract expects document-only images, but you have non-document objects in your image. You need a sophisticated segmentation (and then probably some image-processing) step before feeding the result to Tesseract-OCR.
I have a three-step solution:
1. Take the part of the image you want to recognize
2. Apply Gaussian blur
3. Apply simple thresholding
You can use index ranges to get the part of the image. For instance, you could select the
height range as: int(h/4) + 40 to int(h/2) - 20
width range as: int(w/2) to int((w*3)/4)
Result (intermediate images omitted): the cropped part, its Gaussian-blurred version, and the thresholded version.
Pytesseract output: CMA CGM
Code:
import cv2
import pytesseract
img = cv2.imread('YizU3.jpg')
gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
(h, w) = gry.shape[:2]
# Step 1: crop to the region containing the text (the ranges given above)
gry = gry[int(h/4) + 40:int(h/2)-20, int(w/2):int((w*3)/4)]
# Step 2: Gaussian blur
blr = cv2.GaussianBlur(gry, (3, 3), 0)
# Step 3: Otsu thresholding on the blurred crop
thr = cv2.threshold(blr, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
txt = pytesseract.image_to_string(thr)
print(txt)
cv2.imshow("thr", thr)
cv2.waitKey(0)