Why does get_linesize() incorrectly return 1 for .fon fonts? - python-2.7

I've run into something strange in Pygame 1.9.1's font module: .get_linesize() returns 1 for fonts whose glyph height (let alone per-line height) clearly renders greater than 1. It only happens with .fon fonts.
Here are two examples, plus a third font that does work, for the sake of a control. I've run these from IDLE's shell, but the same thing happens with proper modules.
In any case, Pygame has already been initialized.
>>> testfont = pygame.font.Font("C:/Windows/Fonts/vga850.fon", 12)
>>> testfont.get_linesize() # This one returns 1. For 'Terminal Regular'
1
>>> otherfont = pygame.font.Font("C:/Windows/Fonts/vgafix.fon", 18)
>>> otherfont.get_linesize() # This also returns 1. For 'Fixedsys Regular'
1
>>> lastfont = pygame.font.Font("C:/Windows/Fonts/OCRAStd.otf", 24)
>>> lastfont.get_linesize() # This returns the correct value. For 'OCR A Std Regular'
29
>>> textsurf = testfont.render("This is a nightmare!", True, (0,0,0))
>>> textsurf.get_size()[1] # Let's get the height of this surface...
12
>>> othersurf = otherfont.render("An inescapable nightmare!", False, (0,0,0))
>>> othersurf.get_size()[1] # This one, too. Antialiasing makes no difference.
15
>>> lastsurf = lastfont.render("You're okay, OCRA.", True, (0,0,0))
>>> lastsurf.get_size()[1] # And finally, the control...
25
The height of the render for the control is a little shorter, since get_linesize() includes a gap between lines for aesthetic reasons.
<Font>.size("sample string")[1] works correctly, so that's been my stopgap for line height.
All three fonts render correctly.
The common thread among the fonts that do not respond properly to <Font>.get_linesize() is that they all share the extension .fon. The easy 'solution', then, is simply: "Do not use .get_linesize() with .fon fonts; use .size('sample')[1] + some_adjustment instead."
This, however, is somewhat inelegant and (worse still!) terribly boring, and I'm much more interested to know what causes this problem, and whether there is a way to make these fonts work with get_linesize() as they should.
I looked through Pygame's documentation and couldn't find anything to do with this issue, and a number of web searches proved fruitless as well.

Pygame's Font support depends on SDL_ttf, which in turn depends on FreeType 2. I suspect there's some problem in calculating the line-height value as the font goes through those layers (specifically the SDL_ttf layer, as FreeType 2 just reads the font data).
Consider using pygame.freetype.Font instead of pygame.font.Font. The freetype module skips the SDL_ttf layer and works with FreeType directly, and it offers a much richer set of font options. I'd try get_sized_height() as a replacement for get_linesize().
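A minimal sketch of that suggestion, reusing the Terminal font path from the question (untested against .fon files, so treat it as a starting point rather than a guaranteed fix):

import pygame.freetype

pygame.freetype.init()
ftfont = pygame.freetype.Font("C:/Windows/Fonts/vga850.fon", 12)
# get_sized_height() reports the recommended line height at the current
# point size, computed by freetype itself rather than by SDL_ttf.
print(ftfont.get_sized_height())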
If that still doesn't work, it is most likely a FreeType bug in reading FON files, or possibly the FON files themselves don't have the value set correctly.

Related

sdl ttf_rendertext_blended fails randomly

EDIT: Even though the problem still exists, I haven't been able to reproduce it frequently enough to examine it more closely. See more info at the end of the question.
I started to develop a game, and I am currently writing the basic library for it. I'm using the D programming language with SDL-2 and OpenGL 3 (using Derelict3 bindings), on Linux Mint 13 (Maya). The compiler is DMD64 D Compiler v2.067.1, and I rebuild the binary each time with 'rdmd'.
To render (changing) text, I create glyphs on-demand. The piece of code I use for this is:
class Font {
    ...

    Texture render(char c) {
        if(!(c in rendered)) rendered[c] = texture(to!string(c));
        return rendered[c];
    }

    Texture texture(string text) {
        SDL_Color color = {255, 255, 255, 255};
        auto bitmap = TTF_RenderText_Blended(
            font,
            std.string.toStringz(text),
            color
        );
        if(!bitmap) {
            throw new TTFError(
                "TTF_RenderText_Blended: " ~
                to!string(TTF_GetError()) ~ ": '" ~ text ~ "'"
            );
        }
        auto texture = new Texture(bitmap);
        SDL_FreeSurface(bitmap);
        return texture;
    }
The problem is that this fails completely at random. Sometimes it works without any problems. Interestingly, when it does fail to render a glyph, it keeps failing on that same glyph over and over again. Here is an example from catching the exception I throw:
...
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
TTF_RenderText_Blended: Text has zero width: '9'
TTF_RenderText_Blended: Text has zero width: '6'
...
(I'm printing the score to the screen; the other numbers show up fine except those few.) The numbers TTF_RenderText_Blended fails to render vary from run to run, and as mentioned, from time to time it renders all the numbers.
One detail: the static strings I render before entering the game loop have never failed to render, only the single letters I use for changing text.
I'm pretty much out of ideas and haven't found anything related to this problem by searching the web. Any ideas for where to look for solutions are very much appreciated.
CURRENT SITUATION: I updated the compiler to DMD 2.067.1 and the problem remains (compilers used so far: 2.066.1, 2.067.1). The whole project family, so to speak, is on GitHub at the moment:
https://github.com/mkoskim/games
The text glyph rendering function is located in this file:
https://github.com/mkoskim/games/blob/master/engine/ext/font.d
...and it is used from here:
https://github.com/mkoskim/games/blob/master/engine/ext/gui/label.d
The problem occurs mainly/most frequently in the pacman game (although only very rarely right now):
https://github.com/mkoskim/games/tree/master/testbench/pacman
If you want to try it out, first read the (hopefully complete enough) installation instructions:
https://github.com/mkoskim/games/blob/master/INSTALL
The project is made for 64-bit Linux Mint Maya, and it is currently not that user-friendly or portable from a build perspective. Pacman is the only demo that (hopefully) works without a game controller. After successfully installing the required libraries and tools, you can build it with the command:
games/testbench/pacman$ make default
I ran into the exact same issue, and for me it was fixed by keeping the SDL_RWops structure used to create the font (with TTF_OpenFontRW) alive for the whole lifetime of the TTF_Font created from it. I saw you're creating the font with TTF_OpenFontRW as well, so I assume this will fix it for you too. It looks like SDL_ttf relies on the RWops being kept alive; otherwise it reads freed memory.
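For illustration, here is that lifetime rule sketched with Python's PySDL2 bindings rather than D (hypothetical font path; the pattern, not the language, is the point):

import sdl2
import sdl2.sdlttf as sdlttf

sdlttf.TTF_Init()

# Keep a reference to the RWops for as long as the font exists.
rwops = sdl2.SDL_RWFromFile(b"DejaVuSans.ttf", b"rb")
font = sdlttf.TTF_OpenFontRW(rwops, 0, 16)  # freesrc=0: we manage the RWops ourselves

# ... render glyphs with the font here ...

sdlttf.TTF_CloseFont(font)  # only now is it safe to release the RWops
sdl2.SDL_RWclose(rwops)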
I know this question is a little bit outdated, but I may have had a similar problem.
I fixed it by simply calling SDL_DestroyTexture() every time before I used TTF_RenderText_Blended() :)

cv2.imread always returns NoneType

cv2.imread is always returning None.
I am using python version 2.7 and OpenCV 2.4.6 on 64 bit Windows 7.
Maybe it's some kind of bug or permissions issue, because the exact same installation of Python and the cv2 packages on another computer works correctly. Here's the code:
im = cv2.imread("D:\testdata\some.tif",CV_LOAD_IMAGE_COLOR)
I downloaded OpenCV from http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv. Any clue would be appreciated.
First, make sure the path is valid and contains no unescaped single backslashes. Check the other answers, e.g. https://stackoverflow.com/a/26954461/463796.
If the path is fixed but the image still does not load, it might indeed be an OpenCV bug that is not yet resolved, as of 2013; cv2.imread is not working properly under Win32 for me either.
In the meantime, use LoadImage, which should work fine.
im = cv2.cv.LoadImage("D:/testdata/some.tif", CV_LOAD_IMAGE_COLOR)
In my case the problem was the spaces in the path. After I moved the images to a path with no spaces, it worked.
Try changing the direction of the slashes:
im = cv2.imread("D:/testdata/some.tif", CV_LOAD_IMAGE_COLOR)
or add r to the beginning of the string to make it a raw string:
im = cv2.imread(r"D:\testdata\some.tif", CV_LOAD_IMAGE_COLOR)
I also ran into the same issue, on Ubuntu 18.04.
cv2.imread(path)
I solved it by changing the path argument from a relative file path to an absolute file path.
Hope it is useful.
I just stumbled upon this one. The solution is very simple but not intuitive.
If you use relative paths, you can use either '\' or '/', as in test\pic.jpg or test/pic.jpg respectively.
If you use absolute paths, you should only use '/', as in /.../test/pic.jpg on Unix or C:/.../test/pic.jpg on Windows.
To be on the safe side, just use for root, _, files in os.walk(<path>): in combination with abs_path = os.path.join(root, file). Calling imread afterwards, as in img = cv2.imread(abs_path), is always going to work; see the sketch below.
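A short sketch of that approach (search_dir is a placeholder for your image directory):

import os
import cv2

search_dir = "test"
for root, _, files in os.walk(search_dir):
    for name in files:
        abs_path = os.path.join(root, name)
        img = cv2.imread(abs_path)
        if img is None:  # imread signals failure by returning None
            print("could not load:", abs_path)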
In case no one has mentioned it in this question: another workaround is to read the image with plt and then convert it to BGR order.
import matplotlib.pyplot as plt
img = plt.imread(img_path)
print(img.shape)
img = img[..., ::-1]  # reverse the channel axis: RGB -> BGR
This has been mentioned in:
cv2.imread does not read jpg files
This took a long time to resolve. First, make sure that the file is in the directory, and check that even though Windows Explorer says the file is "JPEG", it may actually be "JPG". The first print statement is key to making sure that the file actually exists. I am a total beginner, so if the code sucks, so be it.
The code just imports a picture and displays it. If the code finds the file, then True will be printed in the Python window.
import cv2
import os

image_path = "C:/python27/test_image.jpg"
print os.path.exists(image_path)   # True if the file actually exists

CV_LOAD_IMAGE_COLOR = 1            # 1 loads a colour image; use 0 for grayscale
img = cv2.imread(image_path, CV_LOAD_IMAGE_COLOR)
print img.shape

cv2.namedWindow('Display Window')  # create window for display
cv2.imshow('Display Window', img)  # show image in the window
cv2.waitKey(0)                     # wait for keystroke
cv2.destroyAllWindows()            # destroy all windows
I had a similar problem; renaming the image to use only English (Latin) letters worked for me. It also did not work with a purely numeric name (e.g. 1.jpg).
My OS is Windows 10, and I noticed imread is very sensitive to the path. None of the recommendations about slashes worked for me, so here is how I managed to solve the problem: I placed the file in the project folder and typed:
img = cv2.imread("MyImageName.jpg", 0)
So no path and no folder, just the file name. That worked for me.
Also try different files, from different sources and in different formats.
I spent some time on this only to find that the error was caused by a broken image file in my case. So please manually check your file to make sure it is valid and can be opened by common image viewers.
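If you want to do that check programmatically, here is a quick sketch with PIL/Pillow (assuming it is installed; the path is the one from the question):

from PIL import Image

try:
    Image.open("D:/testdata/some.tif").verify()  # parses the file without fully decoding it
    print("file parses as an image")
except Exception as error:
    print("broken image:", error)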
I had a similar issue; changing the direction of the slashes worked:
Change / to \
In my case, changing the file names to the Latin alphabet helped. Instead of renaming all the files, I wrote a simple wrapper that renames a file to a random GUID before the load and renames it back right after.
import os
import uuid
import logging
import cv2

logger = logging.getLogger(__name__)

def wrap_file_rename(my_path, function):
    directory = os.path.dirname(my_path)
    new_full_name = os.path.join(directory, str(uuid.uuid4()))
    os.rename(my_path, new_full_name)      # hide the non-Latin name
    try:
        return function(new_full_name)
    except Exception as error:
        logger.error(error)
    finally:
        os.rename(new_full_name, my_path)  # always restore the original name

def my_image_read(my_path, param=None):
    return wrap_file_rename(
        my_path,
        lambda p: cv2.imread(p) if param is None else cv2.imread(p, param))
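Usage is then a drop-in replacement for cv2.imread (hypothetical non-Latin file name):

img = my_image_read(u"D:/testdata/изображение.tif")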
Sometimes the file is corrupted. If it exists and cv2.imread returns None, this may be the case.
Try opening the file from the file explorer and see if that works.
I've run into this. It turns out the PIL module provides this functionality.
Similarly, numpy.imread and scipy.misc.imread both didn't exist until I installed PIL.
In my configuration (Win7, Python 2.7), that was done as follows:
cd /c/python27/scripts
easy_install PIL

Loading an image using Pyglet

I am playing around with pyglet 1.2alpha-1 and Python 3.3. I have the following (extremely simple) application and cannot figure out what my issue is:
import pyglet

window = pyglet.window.Window()

#image = pyglet.resource.image('img1.jpg')
image = pyglet.image.load('img1.jpg')

label = pyglet.text.Label('Hello, World!!',
                          font_name='Times New Roman',
                          font_size=36,
                          x=window.width//2, y=window.height//2,
                          anchor_x='center', anchor_y='center')

@window.event
def on_draw():
    window.clear()
    label.draw()
    # image.blit(0, 0)

pyglet.app.run()
With the above code, my text label will appear as long as image.blit(0, 0) is commented out. However, if I try to display the image, the program crashes with the following error:
File "C:\Python33\lib\site-packages\pyglet\gl\lib.py", line 105, in errcheck
raise GLException(msg)
pyglet.gl.lib.GLException: b'invalid value'
I also get the above error if I try to use pyglet.resource.image instead of pyglet.image.load (the image and py file are in the same directory).
Anyone know how I can fix this issue?
I am using Python 3.3, pyglet 1.2alpha-1, and Windows 8.
The code, including the image.blit, runs fine for me. I'm using Python 2.7.3 and pyglet 1.1.4.
There's nothing wrong with the code. You might consider trying other Python and pyglet versions for the time being (until pyglet has a new stable release).
This isn't a "fix", but it might at least determine whether it's fixable or not (mine was not). (From the pyglet mailing group.)
You can verify whether your system supports textures larger than 1024 by running this code (Python 3+):
# Note: an active GL context is required, e.g. after creating a pyglet Window.
from pyglet.gl import glGetIntegerv, GLint, GL_MAX_TEXTURE_SIZE

i = GLint()  # GLint is the correct 32-bit type (ctypes' c_long may be 64-bit on Linux)
glGetIntegerv(GL_MAX_TEXTURE_SIZE, i)
print(i.value)  # 1024, or higher
That is the maximum texture size your system supports. If it's 1024, then any larger picture will raise an exception. (And the only fix is to get a better system.)
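To relate that limit to the image from the question, a small sketch (a GL context must exist before the query, hence the hidden window):

import pyglet
from pyglet.gl import glGetIntegerv, GLint, GL_MAX_TEXTURE_SIZE

window = pyglet.window.Window(visible=False)  # creates a GL context

limit = GLint()
glGetIntegerv(GL_MAX_TEXTURE_SIZE, limit)

img = pyglet.image.load('img1.jpg')
if img.width > limit.value or img.height > limit.value:
    print('image exceeds the maximum texture size (%d)' % limit.value)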

View a sequence of images using Python and NumPy

I am using Python's PIL library to display images. Now I have a sequence of frames to display as video content. I have a np.array that contains the RGB values of all the frames.
I could not find a method similar to Matlab's implay to display these frames.
I could call imshow in a loop, but that would be very slow, and I would also need to specify the frame rate.
Matplotlib animations work well and are easy to use. For reasonably sized images they typically run at around 30fps. Matplotlib 1.1+ has a nice new animation interface: here are some examples and a tutorial.
Older versions of matplotlib aren't too hard to animate either (you basically just set the data directly and refresh the plot), but the animation depends a bit more on the backend, so you need to look for an appropriate example.
For a specific example: if images is your list of matplotlib images that you want to animate (for ArtistAnimation, one list of artists per frame), you can simply do:
animation.ArtistAnimation(fig, images, interval=50, blit=True, repeat_delay=1000)
This, by the way, is taken from this example, if you want to also see the code that generates the test images. The code to animate is simply the line above; a self-contained sketch follows below.
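Here is that sketch, with random placeholder noise standing in for your (N, H, W, 3) RGB array:

import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

frames = np.random.rand(30, 64, 64, 3)  # stand-in for your frame array

fig = plt.figure()
# ArtistAnimation expects one list of artists per frame.
images = [[plt.imshow(frame, animated=True)] for frame in frames]
ani = animation.ArtistAnimation(fig, images, interval=50, blit=True,
                                repeat_delay=1000)
plt.show()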
I have implemented a handy script that just suits your need. Try it out here.
An example showing the images in a directory as a video would be:
import os
import glob
from scipy.misc import imread
from videofig import videofig  # the helper script linked above

img_dir = 'YOUR-IMAGE-DIRECTORY'
img_files = glob.glob(os.path.join(img_dir, '*.jpg'))

def redraw_fn(f, axes):
    img_file = img_files[f]
    img = imread(img_file)
    if not redraw_fn.initialized:
        redraw_fn.im = axes.imshow(img, animated=True)
        redraw_fn.initialized = True
    else:
        redraw_fn.im.set_array(img)

redraw_fn.initialized = False

videofig(len(img_files), redraw_fn, play_fps=30)
If you happen to already have a working OpenCV install built with OpenEXR, you can use the Python bindings to quickly† view images with only a little bit of boilerplate. (If you don't, rebuilding OpenCV is at least as much of an irritating time-sink as, say, compiling SciPy from source in the first place.) From this example:
import OpenEXR, Imath, cv

filename = "GoldenGate.exr"
exrimage = OpenEXR.InputFile(filename)

dw = exrimage.header()['dataWindow']
(width, height) = (dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1)

def fromstr(s):
    mat = cv.CreateMat(height, width, cv.CV_32FC1)
    cv.SetData(mat, s)
    return mat

pt = Imath.PixelType(Imath.PixelType.FLOAT)
(r, g, b) = [fromstr(s) for s in exrimage.channels("RGB", pt)]

bgr = cv.CreateMat(height, width, cv.CV_32FC3)
cv.Merge(b, g, r, None, bgr)

cv.ShowImage(filename, bgr)
cv.WaitKey()
I believe the OpenCV matrix type implements the Python interfaces for memoryview et al., so don't be scared off by those objects: they're NumPy arrays with different socks on, if you will.
†) quickly, w/r/t both the developer sense of speed (you can use this stuff immediately instead of building SciPy addons or mucking about with the Python array-view C interface) and in the real sense: everything that comprises the aforementioned stuff (the OpenCV matrix structs, their related Python C API underpinnings, the OpenEXR format, and the stock implementation of the interface to same) has been raked over the optimization coals for years, largely by notable and grant-backed squadrons of specialist scholar-nerds who know what they are doing in this arena.

How do I set the DPI of a scan using TWAIN in C++

I am using TWAIN in C++, and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead the page just scans with set defaults and is stored for them. I need to set the DPI manually but cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION, but when I look at the image's info it always shows the same resolution no matter what I set via the ICAPs. Is there another way to set the resolution of a scanned image, or is there an additional step that I cannot find anywhere in the documentation?
Thanks
I use ICAP_XRESOLUTION and ICAP_YRESOLUTION to set the scan resolution for a scanner, and it works at least for a number of HP scanners.
Code snippet:
float x_res = 1200;
cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if(cap.hContainer)
{
    val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;
    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32) &fix32_val);
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}

TW_FIX32 FloatToFIX32(float i_float)
{
    TW_FIX32 Fix32_value;
    TW_INT32 value = (TW_INT32)(i_float * 65536.0 + 0.5);
    Fix32_value.Whole = LOWORD(value >> 16);
    Fix32_value.Frac  = LOWORD(value & 0x0000ffffL);
    return Fix32_value;
}
The value should be of type TW_FIX32, which is a fixed-point number format defined by TWAIN (strange but true).
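To make the encoding concrete, here is the same conversion sketched in Python (16 bits of whole part, 16 bits of fraction in units of 1/65536; negative values are ignored for simplicity):

def float_to_fix32(x):
    value = int(x * 65536.0 + 0.5)
    return value >> 16, value & 0xFFFF  # (Whole, Frac)

def fix32_to_float(whole, frac):
    return whole + frac / 65536.0

print(float_to_fix32(1200.0))                # -> (1200, 0)
print(fix32_to_float(*float_to_fix32(0.5)))  # -> 0.5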
I hope it works for you!
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the buggiest drivers out there. Controlling the scanning process with TWAIN has always been a big headache, because most drivers have never been tested without the scan dialog.
As far as I know there is also no test suite for TWAIN drivers, so each of them will behave slightly differently.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners, and a scanner module full of hacks and workarounds for each different driver.
Take ICAP_XRESOLUTION, for example: the TWAIN documentation says you have to send the resolution as a 32-bit float. Have you tried setting it with an integer instead? Or sending it as a float, but putting the bit representation of an integer into the float, or vice versa? Any of these could work for the driver you're dealing with. Or none of them might work at all.
I doubt the situation has changed much since then, so good luck getting it to work on at least half of the machines out there.