Distorted Fonts - cocos2d-iphone

I am getting some weird distortion of my CCLabelBMFont labels in Cocos2D.
The distortion appears on both the iPad device and the simulator. Notable points about this:
I have other labels using the same font file that are not showing this
I have made sure the coordinates of the labels are all integers, no floats
there is no scaling of the labels
I have tried with and without [label.texture setAliasTexParameters]; no difference
If I move the label to a different coordinate, it sometimes corrects the distortion
Any idea what could be going on?
UPDATE: I changed my label to a TTF label, and the issue remains! Even when no font file is used, the distortion still appears.

Some digging on Cocos2d forums led me to add this:
[[CCDirector sharedDirector] setProjection:CCDirectorProjection2D];
This seems to resolve the issue. Does anyone know whether it has other undesired side effects, since this is not the default projection in Cocos2d?
UPDATE: This solved my issue on iOS 4 only; the problem persists on iOS 5. I am now seeing that the distortion can be removed by adjusting the anchor point of the label, so it seems to be affected by that. Probably a bug?
UPDATE 2: It turns out that my symptoms were caused by two different things. The projection did in fact make a difference with some kinds of distortion, and on all iOS versions, so the code above is useful. Second, I found a conditional statement that sets the position of the font label and did not always produce integer coordinates. Casting the x and y parts of the coordinate with (int) resolved the issue (see the sketch below). Sprites can handle floating-point coordinates without distortion, but CCLabels cannot, it seems.
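For reference, a minimal sketch of the two fixes combined (Objective-C; the label and the computedX/computedY variables are hypothetical):
// 1. Use the 2D projection instead of the default one.
[[CCDirector sharedDirector] setProjection:CCDirectorProjection2D];
// 2. Snap the computed label position to whole pixels before assigning it.
CGPoint pos = ccp(computedX, computedY);        // may carry fractional values
label.position = ccp((int)pos.x, (int)pos.y);   // truncate to integer pixels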

Add some spacing around each character. This is normally caused by nearby glyphs in the texture atlas "bleeding" into one another due to texture filtering. Both Glyph Designer and Hiero let you specify spacing; typically two pixels between glyphs is enough to stop the bleeding.

Related

Question About Alpha Blending in Direct3D9

I needed to make an object in my game transparent, but it wasn't working properly. After some research, I found out how to do alpha blending properly in Direct3D9 and implemented some code that finally makes the object transparent. However, while I have a basic idea of it, I am still a bit confused about how it all works. I have done a lot of research, but I have only found very vague answers that left me more confused. So, what do these two lines of code really mean and do?
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
Alpha blending in D3D is done according to this equation:
Color = TexelColor x SourceBlend + CurrentPixelColor x DestBlend
The setting you quote sets SourceBlend to the alpha value of the source texture, and DestBlend to one minus that alpha value. This is the classic post-blending (non-premultiplied) setup. This kind of mode is used to "anti-alias" an image, for example to soften the edges of a circle so they do not look pixelated, and it is quite good for things like smoke effects or semi-transparent objects such as plastics.
Why do you have to specify it? Well, if you instead set DestBlend to always be one, you get a kind of "ghosting" effect, like partially seeing a reflection in a pane of glass; that variant works particularly well with an environment map. Pre-blending can also be useful, but it requires you to change the format of your inputs. A sketch of both setups follows.
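To make this concrete, here is a minimal sketch of the render-state setup, assuming d3ddev is your IDirect3DDevice9 pointer as in the question:
// Classic post-blended transparency: out = src * srcAlpha + dest * (1 - srcAlpha)
d3ddev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
// "Ghosting" variant mentioned above: out = src * srcAlpha + dest * 1
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_ONE);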

Pixel shader in Direct2D render error along the middle

I've created a pixel shader for Direct2D that blurs along edges where the alpha channel is lower than 1.0.
For every pixel I sample a blurRadius worth of pixels up, down, left and right. In the middle of the image (vertically and horizontally) some render errors occur. These go away when I only sample pixels behind (above or to the left of) the pixel I'm processing. I think my input image (size 1920x1080) is being chopped up and I'm sampling outside the piece the current pixel is in.
Does anyone have a clue what is going on, whether my assumption is right, and what to do about it? In the resulting image, dark lines appear that are not supposed to be there; they go away when I only sample surrounding pixels to the left or top.
I finally fixed this and I hope this can help others.
I found out that the problem only occurred when the input size differed from the output size. I had created my effect based on the Microsoft sample effect. The problem with the sample is that it maps pInputRects[0] to the pOutputRect in MapOutputRectToInputRects, which somehow messes up the sampling in the effect. The fix is to set pInputRects[0] to the pInputRects[0] value received in MapInputRectsToOutputRect, roughly as sketched below.
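A rough sketch of that fix, assuming a custom ID2D1Transform implementation; the class name MyTransform and the cached member m_inputRect are hypothetical:
// Hypothetical member on the transform class: D2D1_RECT_L m_inputRect;
IFACEMETHODIMP MyTransform::MapInputRectsToOutputRect(
    const D2D1_RECT_L* pInputRects, const D2D1_RECT_L* pInputOpaqueSubRects,
    UINT32 inputRectCount, D2D1_RECT_L* pOutputRect, D2D1_RECT_L* pOutputOpaqueSubRect)
{
    m_inputRect = pInputRects[0];   // remember the real input rectangle
    // ... compute *pOutputRect and *pOutputOpaqueSubRect as before ...
    return S_OK;
}
IFACEMETHODIMP MyTransform::MapOutputRectToInputRects(
    const D2D1_RECT_L* pOutputRect, D2D1_RECT_L* pInputRects, UINT32 inputRectCount) const
{
    // The sample copies *pOutputRect into pInputRects[0], which breaks when the
    // input and output sizes differ; return the cached input rectangle instead.
    pInputRects[0] = m_inputRect;
    return S_OK;
}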

Matplotlib axis text coordinates inconsistency?

I'm working on a piece of code to automatically align x-axis labels for a variable number of subplots. When I started having trouble setting label positions manually, I checked to be sure I could just transform from one set of coordinates to the other without changing anything, with a code snippet like this:
# xaxes is a list of Axes objects
textCoords = [ax.xaxis.get_label().get_position() for ax in xaxes]
newCoords = [ax.transAxes.inverted().transform(
                 ax.xaxis.get_label().get_transform().transform(c))
             for ax, c in zip(xaxes, textCoords)]
for ax, c in zip(xaxes, newCoords):
    ax.xaxis.set_label_coords(*c)
In theory, this code doesn't change any coordinates: it gets the position of each label, maps it to Axes coordinates using the Text object's internally stored transform, and then sets the position. Yet running this code removes my labels entirely, and a little experimentation shows that they go off the bottom edge of the plot.
Have I just misunderstood the transforms involved here?
You're understanding the transforms correctly, but there's a caveat to using display coordinates before the plot has been displayed.
The short answer is that putting in a call to plt.draw() before your code snippet above will fix your immediate problem.
You're trying to link the different axes through display coordinates. However, before the plot has been drawn for the first time, the renderer isn't fully initialized yet, and display coordinates don't have much meaning.
Can you elaborate a bit more on what you're trying to do? There may be an easier way.
Alternatively, if you want to avoid the extra draw, you can link things through figure coordinates before the plot has been drawn (they're well defined regardless); both options are sketched below.
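A small sketch of both options, assuming xaxes is the list of Axes from the question and the target figure height of 0.05 is purely illustrative:
import matplotlib.pyplot as plt

# Option 1: force a draw so display coordinates become valid, then run the snippet above.
plt.draw()

# Option 2: go through figure coordinates, which are defined before the first draw.
for ax in xaxes:
    disp = ax.figure.transFigure.transform((0.5, 0.05))      # common height in figure coords
    x_axes, y_axes = ax.transAxes.inverted().transform(disp)  # convert to this Axes' coords
    ax.xaxis.set_label_coords(0.5, y_axes)                    # keep x centered per Axes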

Cocos2d - hiding, clipping or masking sprite like in game Candy Crush

I have a problem that I also saw in the game Candy Crush Saga, where they dealt with it successfully. I would like a sprite to show only while it is inside the board (see the image link below). The board can have different shapes, like the levels in the mentioned game.
Does anyone have ideas on how this can be achieved with Cocos2d?
I will be very glad if someone has some tips.
Thank you in advance.
image link: http://www.android-games.fr/public/Candy-Crush-Saga/candy-crush-saga-bonus.jpg
In Cocos2d you can render sprites at different z levels. Images at a lower z level are drawn first by the graphics card, and images (sprites) with a higher z value are drawn later. Hence if an image (say A) is at the same position as another but has a higher z value, you will only see the pixels of image A where the two images intersect.
Cocos2d also uses layers, so you can add sprites to a layer and set the layer to a specific z value. I expect they used a layer for the board (say at z=1) with a PNG image containing transparent areas where the sprites should be visible, and a second layer at z=0 for the sprites. That way you only see the sprites where the board is transparent.
Does this help?
I found out Cocos2d has a class CCClippingNode which does exactly what I wanted. At first I thought it could clip only rectangular areas, but after some research I found it can also clip arbitrary paths (see the sketch below).
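A minimal sketch of the idea (cocos2d-iphone 2.1+; the board outline, vertex count and the candyLayer node are hypothetical):
// Build a stencil node with the board's shape.
CCDrawNode *stencil = [CCDrawNode node];
CGPoint boardShape[] = { ccp(0,0), ccp(300,0), ccp(300,200), ccp(150,300), ccp(0,200) };
[stencil drawPolyWithVerts:boardShape count:5
                 fillColor:ccc4f(1, 1, 1, 1)
               borderWidth:0
               borderColor:ccc4f(0, 0, 0, 0)];
// Anything added to the clipping node is only visible inside the stencil shape.
CCClippingNode *clipper = [CCClippingNode clippingNodeWithStencil:stencil];
clipper.alphaThreshold = 0.05f;   // needed if the stencil uses transparent pixels
[clipper addChild:candyLayer];    // candyLayer holds the moving sprites
[self addChild:clipper];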

OpenGL: distorted textures when not divisible by 2

I have a game engine that uses OpenGL for display.
I coded a small menu for it, and then I noticed something odd after rendering the text.
http://img43.imageshack.us/i/garbagen.png/
As you see, the font is somewhat unreadable, but the lower parts (like "Tests") look as intended.
Seems the position on the screen affects readability, as the edges get cut.
The font is 9x5 pixels, a value I divide by 2 to obtain the half width/height so I can render the object from its center.
So, with 4.5x2.5 pixels (I use floats for the x, y, width and height of simple rectangles), the texture gets messed up if it is rendered anywhere other than coordinates ending in roughly .5. So far it only happens on two computers, but I would hate for this error to remain, since it makes text unreadable. I can make it 4.55x2.55 (by adding a bit of extra size when dividing by 2), and then it renders adequately on all machines (or at least the problem is rarer on the two problematic ones), but I fear this hack is too gross to keep, doesn't really solve the issue, and might scale the text up and make the font look "fat".
So my question is: is there any way I can prevent this without switching those values to integers? (I need the slight differences floats offer.) Can I detect which widths/heights are divisible by two and handle the others differently? If it's indeed a video card issue, is there a way to work around it?
Sorry if anything is missing from the question; I don't often resort to asking the internet and I have no formal coding education. I'll be happy to provide any line or chunk of code that might be required.
If you have to draw your text at non-integer coordinates, you should enable texture filtering. Use glTexParameterfv to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_LINEAR. Your text will be blurred, but that cannot be avoided without resorting to pixel perfect (=integer) coordinates.
Note that your 0.05 workaround does nothing but change the way the effective coordinates are rounded to integers. When using GL_NEAREST texture filtering, there is no such thing as a half-pixel offset: even if you specify such coordinates, the texture filter rounds them for you. The extra 0.05 just pushes the rounding in the right direction.
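A minimal sketch of the filtering setup suggested above (fontTexture stands in for your glyph texture id; glTexParameteri is the integer-parameter variant of the same call):
// Bilinear filtering: glyphs sampled at fractional positions are interpolated
// instead of snapped to the nearest texel, so edges no longer get cut off.
glBindTexture(GL_TEXTURE_2D, fontTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);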
For best reliability I would find a way to eliminate the fractions. I only have a little experience with XNA and MDX, so I don't know if there is a good reason, but why are you rendering from the center rather than from a corner?
Trying to do pixel-perfect stuff like this can be hard in OpenGL due to different resolutions, texture filtering etc.
Some things you could try:
Draw your font into one large texture (say 512x512).
Draw the glyphs larger than you need and anti-alias using the alpha channel (transparency).
Leave some blank space (4 or 8 pixels) around each glyph. If the glyphs are pushed right up against each other (like you would pack a font for software rendering back in the DOS days), filtering will make them bleed into each other.
Or you could take a different approach and make them out of line segments. This may work better for fonts on the scales you're dealing with.