I needed to make an object in my game transparent, but it wasn't working properly. After some research, I found out how to do alpha blending properly in Direct3D 9 and implemented some code that finally makes the object transparent. However, while I have a basic idea of it, I am still a bit confused about how it all works. I have done lots of research, but I have only found very vague answers which just left me more confused. So, what do these two lines of code really mean and do?
d3ddev->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
Alpha blending in D3D is done according to this equation:

Color = TexelColor × SourceBlend + CurrentPixelColor × DestBlend
The setting you quote sets SourceBlend to the alpha value of the source texel, and DestBlend to one minus that same alpha value. This is the classic "over" blending setting for straight (non-premultiplied) alpha. This kind of mode is used to anti-alias an image, for example to blur the edges of a circle so that they do not look pixelated. It is quite good for things like smoke effects or for semi-transparent objects like plastics.
Why do you have to specify it? Well, if you set DestBlend to always be one, you get a kind of "ghosting" effect, like partially seeing a reflection in a pane of glass. That effect is particularly good when used with an environment map. Pre-multiplied blending can also be useful, but it requires you to change the format of your inputs.
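The equation above can be checked in plain C++ without touching D3D at all. This is a minimal sketch of the fixed-function blend for one color channel, using the `SRCALPHA` / `INVSRCALPHA` factors from the question (the function name is my own):

```cpp
#include <cassert>

// One channel of the blend equation with SRCBLEND = D3DBLEND_SRCALPHA and
// DESTBLEND = D3DBLEND_INVSRCALPHA. srcColor/srcAlpha come from the texel
// being drawn; dstColor is what is already in the framebuffer.
float blendOver(float srcColor, float srcAlpha, float dstColor) {
    return srcColor * srcAlpha + dstColor * (1.0f - srcAlpha);
}
```

For example, a 50% transparent white texel drawn over a black background comes out mid-grey: `blendOver(1.0f, 0.5f, 0.0f)` gives `0.5f`; with alpha 0 the destination survives untouched, and with alpha 1 the source replaces it entirely.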
Related
So, https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glBlendFunc.xhtml has one value that can only be used as a source argument: GL_SRC_ALPHA_SATURATE. It is intended to be used as glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE), together with polygon antialiasing (GL_POLYGON_SMOOTH), on front-to-back sorted geometry. If I'm not mistaken, it ensures that triangle sides that directly connect to another triangle will not show visible lines between them.
The documentation also states that
Destination alpha bitplanes, which must be present for this blend function to operate correctly, store the accumulated coverage.
So, while I know more or less what this functionality is used for, I'd really like to understand how it works exactly and how the alpha bitplanes are used (does this mean that the factor depends on both source alpha and dest alpha?). Since this isn't really documented, I hope someone here (looking at @Nicol Bolas) can shed light on the math or implementation magic behind it.
And then, one step further: is there any other use case GL_SRC_ALPHA_SATURATE can be used for?
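To make the question concrete, here is my understanding of what the blend does per fragment, sketched in plain C++ (a toy one-channel model, not driver code): per the GL spec, the RGB source factor is min(srcAlpha, 1 − dstAlpha), the alpha source factor is 1, and the destination factor is GL_ONE, so destination alpha accumulates coverage as fragments arrive front to back.

```cpp
#include <algorithm>
#include <cmath>

struct Pixel { float color; float alpha; };  // one channel, for brevity

// GL_SRC_ALPHA_SATURATE source factor with GL_ONE destination factor.
// dst.alpha holds the coverage accumulated so far at this pixel.
void blendSaturate(Pixel& dst, float srcColor, float srcAlpha) {
    float f = std::min(srcAlpha, 1.0f - dst.alpha);    // saturate factor
    dst.color = srcColor * f + dst.color;              // dest factor GL_ONE
    dst.alpha = std::min(1.0f, srcAlpha + dst.alpha);  // coverage accumulates
}
```

With this model, two overlapping 60%-coverage edge fragments of a white triangle pair contribute 0.6 and then only min(0.6, 0.4) = 0.4, so the pixel ends at exactly full brightness rather than 120% — which would explain why shared triangle edges show no seam.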
I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that, platform-independent and as fast as possible?
I have already read a bit about using OpenGL, but then I run into the following problem: the matrix can contain up to $4\cdot10^9$ values, so loading the complete matrix into RAM is out of the question, and drawing directly from the matrix is impossible. Furthermore, I have only found functions for drawing 2D images in OpenGL. Is there a way to draw 3D pixels in OpenGL, or should I rather use another approach?
I do not need a moving functionality (at least not at the moment), I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OpenGL, taken from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this, either. You can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to one pixel, but this is probably completely unsatisfactory for you, because it will only draw scattered pixels to the screen (you won't see much at high resolutions).
A general solution would be to render a cube out of ordinary triangles for each point, sorted back to front if you need alpha blending. Ray tracing also has merits in volume rendering. If you want a more specific answer, you will need to narrow your request; reading up on volume rendering in general is a good start.
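Whatever technique you pick, the core of volume rendering is compositing voxel values along each viewing ray. This is a minimal sketch for one screen pixel, walking a voxel column back to front and treating each grey value as both emission and opacity (the function name and the emission = opacity choice are mine; real renderers do this per ray on the GPU, via slice stacks or ray marching):

```cpp
#include <cstdint>
#include <vector>

// Composite one column of voxels back to front with the "over" operator.
// Each grey value in [0,255] is used both as emitted brightness and as
// opacity; the result is the final brightness seen at that pixel.
float compositeColumn(const std::vector<std::uint8_t>& column) {
    float result = 0.0f;
    for (auto it = column.rbegin(); it != column.rend(); ++it) {
        float v = *it / 255.0f;                        // grey -> [0,1]
        float alpha = v;                               // opacity = value
        result = v * alpha + result * (1.0f - alpha);  // back-to-front "over"
    }
    return result;
}
```

Because this only ever touches one column at a time, it also hints at how to handle a matrix too big for RAM: stream the data in slices or bricks and composite incrementally instead of loading everything at once.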
I have decided to rewrite an old Zatacka clone of mine. The old version, running under Allegro 4, uses a logic bitmap: a bitmap used for non-display purposes that directly mirrors the visible "what's on the screen and doesn't move" bitmap, but whose stored integers represent the logical meaning of things on screen. The game got quite colorful, so the things players see may be of any color possible, while the game just remembers what kind of object each pixel represents.
The new clone is not supposed to use Allegro, so I could write the logic bitmap code myself. That said, I would appreciate suggestions for more efficient and precise alternatives.
The structure must be able to stay in sync with the bitmap/texture visible to players. Think of a Worms-style game, but using ground-type variations invisible to the player, or something similar. In addition, the following methods must be implemented:
Checking if all pixels in a circle belong to a small (~6) set of "colors" given as a parameter.
Painting all pixels in a circle with a single "color".
Painting all pixels in a circle (except/only the ones in a small set of "colors" provided as a parameter) with a single "color".
Painting a silhouette of a rotated (preprocessed, if you wish) bitmap with a single "color". (That's the tricky one: would interpreting the bitmap as a simple polygon with loads of right angles do the job?)
This is the minimum. If your structure supports shapes other than circles, that's great.
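For reference, the circle operations above are straightforward over a flat byte grid. This is a sketch of such a structure (class and method names are my own, nothing from Allegro), covering the membership check and the unconditional paint:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// A "logic bitmap": one byte of logical meaning per pixel.
class LogicBitmap {
public:
    LogicBitmap(int w, int h) : w_(w), h_(h), data_(w * h, 0) {}

    std::uint8_t at(int x, int y) const { return data_[y * w_ + x]; }

    // Do all pixels inside the circle belong to the allowed set?
    bool circleAllIn(int cx, int cy, int r,
                     const std::vector<std::uint8_t>& allowed) const {
        bool ok = true;
        forEachInCircle(cx, cy, r, [&](int x, int y) {
            if (!contains(allowed, at(x, y))) ok = false;
        });
        return ok;
    }

    // Paint every pixel inside the circle with one "color".
    void circlePaint(int cx, int cy, int r, std::uint8_t color) {
        forEachInCircle(cx, cy, r, [&](int x, int y) {
            data_[y * w_ + x] = color;
        });
    }

private:
    // Visit every in-bounds pixel whose center lies within the circle.
    template <typename F>
    void forEachInCircle(int cx, int cy, int r, F f) const {
        for (int y = std::max(0, cy - r); y <= std::min(h_ - 1, cy + r); ++y)
            for (int x = std::max(0, cx - r); x <= std::min(w_ - 1, cx + r); ++x)
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                    f(x, y);
    }
    static bool contains(const std::vector<std::uint8_t>& s, std::uint8_t v) {
        for (auto e : s) if (e == v) return true;
        return false;
    }
    int w_, h_;
    std::vector<std::uint8_t> data_;
};
```

The conditional paint is the same loop with the membership test added, and since the allowed set is tiny (~6 entries), the linear scan in `contains` is faster in practice than a hash set.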
I am getting some weird distortion of my CCLabelBMFont labels in Cocos2D, as noted here:
The distortions appear on both iPad device and simulator. Notable points about this:
I have other labels using the same font file that are not showing this
I have made sure the coordinates of the labels are all integers, no floats
there is no scaling of the labels
I have tried with and without [label.texture setAliasTexParameters]; no difference
If I move the label to a different coordinate, it sometimes corrects the distortion
Any idea what could be going on?
UPDATE: I changed my label to a TTF label, and the issue remains! Even when no font file is used, the distortion is appearing.
Some digging on Cocos2d forums led me to add this:
[[CCDirector sharedDirector] setProjection:CCDirectorProjection2D];
This seems to resolve the issue. Does anyone know if it has other undesired side effects, since this is not the default projection in Cocos2D?
UPDATE: This solved my issue on iOS 4 only; the issue persists on iOS 5. I am now seeing that the distortion can be removed by adjusting the anchor point of the label, so it seems to be affected by that. Probably a bug?
UPDATE 2: It turns out that my symptoms were caused by two different things. First, the projection did in fact make a difference with some sorts of distortion, on all iOS versions, so the code above is useful. But second, I found a conditional statement that sets the position of the font label and did not always produce integer coordinates. By placing (int) in front of the x and y parts of the coordinate, the issue was resolved. Sprites can handle floating-point coordinates without distortion, but CCLabels cannot, it seems.
Add some spacing around each character. This is normally caused by nearby glyphs in the texture atlas "bleeding" into one another due to texture filtering. Both Glyph Designer and Hiero allow you to specify spacing; a value of two pixels between glyphs is typically enough to stop the bleeding.
I have a game engine that uses OpenGL for display.
I coded a small menu for it, and then I noticed something odd after rendering the text.
http://img43.imageshack.us/i/garbagen.png/
As you see, the font is somewhat unreadable, but the lower parts (like "Tests") look as intended.
Seems the position on the screen affects readability, as the edges get cut.
The font glyphs are 9x5 pixels, a size that I divide by 2 to obtain half-width/half-height, since I render each quad from its center.
So, with 4.5x2.5 pixels (I use floats for the x, y, width, and height of simple rectangles), the texture is messed up when rendered anywhere other than coordinates ending in .5 or so. So far this only happens on two computers, but I would hate for the error to ship, since it makes the text unreadable. I can make it 4.55x2.55 (by adding a bit of extra size when dividing by 2), and then it renders adequately on all machines (or at least the problem appears far less often on the problematic two). But I fear this hack is too gross to keep and doesn't solve the issue entirely, and it might scale the text up, making the font look "fat".
So my question is: is there any way I can prevent this without converting those values to integers? (I need the slight differences floats offer.) Can I find out which widths/heights are divisible by two and handle the others differently? If it's indeed a video card issue, is it possible to circumvent it?
Sorry if anything is lacking from the question; I don't often resort to asking the internet, and I have no formal coding education. I'll be happy to provide any line or chunk of code that might be required.
If you have to draw your text at non-integer coordinates, you should enable texture filtering. Use glTexParameteri to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_LINEAR. Your text will be blurred, but that cannot be avoided without resorting to pixel-perfect (= integer) coordinates.
Note that your 0.05 workaround does nothing but change the way the effective coordinates are rounded to integers. With GL_NEAREST texture filtering there is no such thing as a half-pixel offset: even if you specify such coordinates, the texture filter rounds them for you. The extra 0.05 just pushes the rounding in the right direction.
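To see why, consider a toy model of GL_NEAREST sampling (this is an illustration of the rounding behavior, not actual driver code): the fetched texel is just the integer part of the unnormalized texture coordinate, so a coordinate sitting exactly on a texel boundary is on a rounding knife-edge, and different GPUs can land on different sides of it.

```cpp
#include <cmath>

// Toy model of GL_NEAREST: the sampled texel index is the integer part
// of the unnormalized texture coordinate. Coordinates exactly on an
// integer boundary are the ambiguous case that varies between GPUs.
int nearestTexel(float texCoord) {
    return static_cast<int>(std::floor(texCoord));
}
```

Under this model, 4.5 and 4.55 both fetch texel 4; the 0.05 only matters for coordinates that would otherwise land exactly on a boundary, where it pushes them safely to one side.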
For best reliability, I would find a way to eliminate the fractions. I only have a little experience with XNA and MDX, so I don't know if there is a good reason, but why do you position by the center rather than by a corner?
Trying to do pixel-perfect stuff like this can be hard in OpenGL due to different resolutions, texture filtering etc.
Some things you could try:
Draw your font into one large texture (say 512x512).
Draw the glyphs larger than you need and anti-alias using the alpha channel (transparency).
Leave some blank space (4 or 8 pixels) around each glyph. If the glyphs are pushed right up against each other (like a font drawn for software rendering back in the DOS days), filtering will make them bleed into each other.
Or you could take a different approach and build the glyphs out of line segments. This may work better for fonts at the scales you're dealing with.