I'm trying to alpha blend sprites and backgrounds with devkitPro (including libnds, devkitARM, etc.).
Does anyone know how to do this?
As a generic reference, I once wrote a small blog entry about this issue. Basically, you first have to define which layer is alpha-blended against which other layer(s). As far as I know,
the source layer(s) must be displayed over the destination layer(s) for any blending to show up; that means the priority of the source layers should be numerically lower than the priority of the destination layers.
The source layer is what is going to be translucent; the destination(s) is what is going to be seen through it (and yes, I find this rather confusing).
For the sprites specifically, you then have three ways to achieve alpha-blending, depending on what you need and what you're "ready to pay" for it (see the sketch after this list):
You can make all the sprites have some alpha-blending by turning on BLEND_SRC_SPRITE in REG_BLDCNT[_SUB] ... not that useful.
You can selectively turn on blending of some sprites by using ATTR0_TYPE_BLENDED. The blending level will be the same for all such sprites (and layers).
Bitmap-type sprites use direct colors (bypassing the palettes), so the ATTR2_PALETTE() field of GBA sprites is useless and has been recycled into ATTR2_ALPHA.
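A minimal sketch of the second option, blending ATTR0_TYPE_BLENDED sprites over BG0 on the main engine. The register and constant names are libnds's, but verify them against your libnds version and treat the values as illustrative:

#include <nds.h>

void enableSpriteBlending(void)
{
    // Select alpha blending, with sprites as the source layer and BG0 as
    // the destination layer. Remember: the source must be displayed over
    // the destination (numerically lower priority) for blending to show.
    REG_BLDCNT = BLEND_ALPHA | BLEND_SRC_SPRITE | BLEND_DST_BG0;

    // Blend factors: result = source * (EVA/16) + destination * (EVB/16).
    // 8/16 + 8/16 gives 50% translucency. Use REG_BLDCNT_SUB and
    // REG_BLDALPHA_SUB for the sub engine.
    REG_BLDALPHA = 8 | (8 << 8);

    // Per-sprite opt-in: mark the OAM entry as blended, e.g. (depending on
    // how you drive OAM)
    //   oamMain.oamMemory[0].attribute[0] |= ATTR0_TYPE_BLENDED;
}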
Sprites on the DS can be alpha blended using the blend control registers. TONC gives the necessary information for getting blending working on the main screen, because the register locations are the same as on the GBA. Alpha blending on the sub screen uses the same process with different registers at a 1000h offset.
The registers you'll be looking at are REG_BLDMOD, REG_COLV, and REG_COLY for the main screen and REG_BLDMOD_SUB, REG_COLV_SUB, and REG_COLY_SUB for the sub screen.
Also remember that you'll have to change the sprite's graphic mode to enable blending per sprite.
It's been a long time since I've done any GBA programming, but as I recall, the DS supports most (if not all) of the stuff that GBA supports. This link has a section on how to do alpha blending for GBA (section 13.2). I don't know if there's a DS-specific way of doing it, but this should work for you.
I'm building an application that draws an anaglyph (stereo image) on a 200 Hz screen from two supplied pictures (NOT a 3D model), so the speed and consistency of redrawing is very important. I've achieved the best results with DirectDraw surfaces and their Flip() (which switches the current surface's image to the secondary one):
(void) lpddsPrimary->Flip(nullptr, DDFLIP_WAIT);
But DirectDraw is very outdated, and I'm looking for a way to reimplement this functionality on top of modern DirectX libraries. I really don't want to create a quad, draw the picture as its texture, and calculate 3D projection matrices just to output 2D images.
I would be really grateful for any snippet of how this can be done with DirectX. Thanks in advance.
For your purposes you can use DXGI and avoid D3D almost completely. You don't say how you get the data into the backbuffer, but DXGI allows you to create a swap chain, flip it (Present), and access the surfaces (e.g. lock them; it's called Map now). For 3D you need the "1" versions, e.g. IDXGISwapChain1. See http://msdn.microsoft.com/en-us/library/windows/desktop/bb205075(v=vs.85).aspx.
Note that IDXGISwapChain1 is a subclass of IDXGISwapChain, and some vital methods such as GetBuffer are on the base interface.
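Strictly speaking you still need a minimal D3D11 device to create the swap chain, but you never have to build a quad or touch a projection matrix: you can copy pixels straight into the back buffer and Present(). A rough sketch, with error handling and Release() calls elided (the flip-model swap effect needs Windows 8 / DXGI 1.2):

#include <d3d11.h>
#include <dxgi1_2.h>

void initAndPresentOnce(HWND hwnd, UINT width, UINT height)
{
    // A device with no 3D drawing: we only need it to create resources.
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* ctx = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &ctx);

    // Walk up from the device to the DXGI factory that owns its adapter.
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);
    IDXGIFactory2* factory = nullptr;
    adapter->GetParent(__uuidof(IDXGIFactory2), (void**)&factory);

    DXGI_SWAP_CHAIN_DESC1 desc = {};
    desc.Width = width;
    desc.Height = height;
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 2;
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL; // flip model, like DirectDraw's Flip()
    IDXGISwapChain1* swapChain = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &swapChain);

    // A CPU-writable staging texture: Map it, write your anaglyph pixels, Unmap.
    D3D11_TEXTURE2D_DESC td = {};
    td.Width = width; td.Height = height;
    td.MipLevels = 1; td.ArraySize = 1;
    td.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    td.SampleDesc.Count = 1;
    td.Usage = D3D11_USAGE_STAGING;
    td.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    ID3D11Texture2D* staging = nullptr;
    device->CreateTexture2D(&td, nullptr, &staging);

    D3D11_MAPPED_SUBRESOURCE mapped;
    ctx->Map(staging, 0, D3D11_MAP_WRITE, 0, &mapped);
    // ... fill mapped.pData row by row (mapped.RowPitch bytes per row) ...
    ctx->Unmap(staging, 0);

    // Blit to the back buffer and flip; no geometry or matrices involved.
    ID3D11Texture2D* backBuffer = nullptr;
    swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (void**)&backBuffer);
    ctx->CopyResource(backBuffer, staging);
    swapChain->Present(1, 0); // the modern equivalent of Flip()
}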
I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this (not to give me code, but to guide me in the right direction).
It sounds like you want to learn about image compositing.
A typical game these days will have a redraw function somewhere that redraws the entire screen, every frame.
void redraw()
{
    drawBackground();   // painted first, so everything else lands on top
    drawCharacters();   // sprites composited over the background
    drawHUD();          // interface elements go on last
    swapBuffers();      // display the finished frame
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
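For example, here is a minimal SDL sketch of that redraw idea (SDL2 API; the asset file names and the magenta colour key are made up for illustration):

#include <SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window* win = SDL_CreateWindow("compositing", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Surface* screen = SDL_GetWindowSurface(win);
    SDL_Surface* background = SDL_LoadBMP("background.bmp");
    SDL_Surface* hero = SDL_LoadBMP("hero.bmp");

    // Magenta pixels in the sprite count as transparent, so the background
    // shows through them when the sprite is blitted on top.
    SDL_SetColorKey(hero, SDL_TRUE, SDL_MapRGB(hero->format, 255, 0, 255));

    for (int x = 0; x < 300; x += 4) {                   // crude "character moves" loop
        SDL_Rect pos = { x, 200, 0, 0 };
        SDL_BlitSurface(background, NULL, screen, NULL); // redraw the whole scene...
        SDL_BlitSurface(hero, NULL, screen, &pos);       // ...then the character on top
        SDL_UpdateWindowSurface(win);                    // the swapBuffers() step
        SDL_Delay(16);
    }
    SDL_Quit();
    return 0;
}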
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen, you cannot see it, it just affects how two images are composited.
I am rewriting an opengl-based gis/mapping program. Among other things, the program allows you to load raster images of nautical charts, fix them to lon/lat coordinates and zoom and pan around on them.
The previous version of the program uses a custom tiling system, where in essence it manually creates mipmaps of the original image, in the form of 256x256-pixel tiles at various power-of-two zoom levels. A tile for zoom level n - 1 is constructed from four tiles from zoom level n, using a simple average-of-four-points algorithm. So, it turns off opengl mipmapping, and instead when it comes time to draw some part of the chart at some zoom level, it uses the tiles from the nearest-match zoom level (i.e., the tiles are in power-of-two zoom levels but the program allows arbitrary zoom levels) and then scales the tiles to match the actual zoom level. And of course it has to manage a cache of all these tiles at various levels.
It seemed to me that this tiling system was overly complex. It seemed like I should be able to let the graphics hardware do all of this mipmapping work for me. So in the new program, when I read in an image, I chop it into textures of 1024x1024 pixels each. Then I fix each texture to its lon/lat coordinates, and then I let opengl handle the rest as I zoom and pan around.
It works, but the problem is: My results are a bit blurrier than the original program, which matters for this application because you want to be able to read text on the charts as early as possible, zoom-wise. So it's seeming like the simple average-of-four-points algorithm the original program uses gives better results than opengl + my GPU, in terms of sharpness.
I know there are several glTexParameter settings to control some aspects of how mipmaps work. I've tried various combinations of GL_TEXTURE_MAX_LEVEL (anywhere from 0 to 10) with various settings for GL_TEXTURE_MIN_FILTER. When I set GL_TEXTURE_MAX_LEVEL to 0 (no mipmaps), I certainly get "sharp" results, but they are too sharp, in the sense that pixels just get dropped here and there, so the numbers are unreadable at intermediate zooms. When I set GL_TEXTURE_MAX_LEVEL to a higher value, the image looks quite good when you are zoomed far out (e.g., when the whole chart fits on the screen), but as you zoom in to intermediate zooms, you notice the blurriness especially when looking at text on the charts. (I.e., if it weren't for the text you might think "wow, opengl is doing a nice job of smoothly scaling my image." but with the text you think "why is this chart out of focus?")
My understanding is that basically you tell opengl to generate mipmaps, and then as you zoom in it picks the appropriate mipmaps to use, and there are some limited options for interpolating between the two closest mipmap levels, and either using the closest pixels or averaging the nearby pixels. However, as I say, none of these combinations seem to give quite as clear results, at the same zoom level on the chart (i.e., a zoom level where text is small but not minuscule, like the equivalent of "7 point" or "8 point" size), as the previous tile-based version.
My conclusion is that the mipmaps that opengl creates are simply blurrier than the ones the previous program created with the average-four-point algorithm, and no amount of choosing the right mipmap or LINEAR vs NEAREST is going to get the sharpness I need.
Specific questions:
(1) Does it seem right that opengl is in fact making blurrier mipmaps than the average-four-points algorithm from the original program?
(2) Is there something I might have overlooked in my use of glTexParameter that could give sharper results using the mipmaps opengl is making?
(3) Is there some way I can get opengl to make sharper mipmaps in the first place, such as by using a "cubic" filter or otherwise controlling the mipmap creation process? Or for that matter it seems like I could use the same average-four-points code to manually generate the mipmaps and hand them off to opengl. But I don't know how to do that...
(1) It seems unlikely; I'd expect it just to use a box filter, which is average-of-four-points in effect. Possibly it's just switching from one texture to a higher-resolution one at a different moment: e.g. it "chooses the mipmap that most closely matches the size of the pixel being textured", so a 256x256 map will be used to texture a 383x383 area, whereas the manual system it replaces may always have scaled down from 512x512 until the target size was 256x256 or less.
(2) not that I'm aware of in base GL, but if you were to switch to GLSL and the programmable pipeline then you could use the 'bias' parameter to texture2D if the problem is that the lower resolution map is being used when you don't want it to be. Similarly, the GL_EXT_texture_lod_bias extension can do the same in the fixed pipeline. It's an NVidia extension from a decade ago and is something all programmable cards could do, so it's reasonably likely you'll have it.
(EDIT: reading the extension more thoroughly, texture bias migrated into the core spec of OpenGL in version 1.4; clearly my man pages are very out of date. Checking the 1.4 spec, page 279, you can supply a GL_TEXTURE_LOD_BIAS)
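For reference, the core-1.4 form of that is the one-liner below; a negative bias makes the sampler prefer the sharper (higher-resolution) levels, and the exact value is just a guess to tune by eye:

// OpenGL 1.4+: bias mipmap level selection toward sharper levels.
glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);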
(3) Yes: if you disable GL_GENERATE_MIPMAP then you can use glTexImage2D to supply whatever image you like for every level of scale, that being what the 'level' parameter dictates. So you can supply completely unrelated mipmaps if you want, as sketched below.
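To make that concrete, here is a sketch of building the levels yourself with the same average-of-four-points (box) filter and handing each one to glTexImage2D. It assumes square, power-of-two, RGBA8 images, and the helper names are made up:

#include <cstdint>
#include <vector>

// Box filter: each destination texel is the average of the four source
// texels above it - the same average-of-four-points scheme as the old program.
static std::vector<uint8_t> downsample(const std::vector<uint8_t>& src, int w, int h)
{
    std::vector<uint8_t> dst((w / 2) * (h / 2) * 4);
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            for (int c = 0; c < 4; ++c) {
                int sum = src[((2 * y    ) * w + 2 * x    ) * 4 + c]
                        + src[((2 * y    ) * w + 2 * x + 1) * 4 + c]
                        + src[((2 * y + 1) * w + 2 * x    ) * 4 + c]
                        + src[((2 * y + 1) * w + 2 * x + 1) * 4 + c];
                dst[(y * (w / 2) + x) * 4 + c] = uint8_t(sum / 4);
            }
    return dst;
}

void uploadCustomMipmaps(std::vector<uint8_t> pixels, int w, int h)
{
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE); // we supply every level
    int level = 0;
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    while (w > 1 && h > 1) {
        pixels = downsample(pixels, w, h);
        w /= 2; h /= 2; ++level;
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
    }
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, level);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
}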
To answer your specific points, the four-point filtering you mention is equivalent to box-filtering. This is less blurry than higher-order filters, but can result in aliasing patterns. One of the best filters is the Lanczos filter. I suggest you calculate all of your mipmap levels from the base texture using a Lanczos filter and crank up the anisotropic filtering settings on your graphics card.
I assume that the original code managed textures itself because it was designed to view data sets that are too large to fit into graphics memory. This was probably a bigger problem in the past, but is still a concern.
Currently I am looking to write a text editor for Linux systems that does some particular text/font highlighting involving OpenGL rendering. Does anyone have suggestions for a C++ graphics rendering library that works well with Linux (Ubuntu in particular, for now)?
Any advice on where to start with rendering 3D text is greatly appreciated!
EDIT: Just to clarify, rendering 3D text is a strict requirement of the project.
There are basically only three ways to do this at the OpenGL level:
Raster Fonts.
Use glBitmap or glDrawPixels to draw a rectangular bunch of pixels onto the screen. The disadvantages of doing this are many:
The data describing each character is sent from your CPU to the graphics card every frame - and for every character in the frame. This can amount to significant bandwidth.
The underlying OpenGL implementation will almost certainly have to 'swizzle' the image data in some manner on its way between CPU and frame-buffer.
Many 3D graphics chips are not designed to draw bitmaps at all. In this case, the OpenGL software driver must wait until the 3D hardware has completely finished drawing before it can get in to splat the pixels directly into the frame buffer. Until the software has finished doing that, the hardware is sitting idle.
Bitmaps and Drawpixels have to be aligned parallel to the edges of the screen, so rotated text is not possible.
Scaling of Bitmaps and Drawpixels is not possible.
There is one significant advantage to Raster fonts - and that is that on Software-only OpenGL implementations, they are likely to be FASTER than the other approaches...the reverse of the situation on 3D hardware.
Geometric Fonts.
Draw the characters of the font using geometric primitives - lines, triangles, whatever. The disadvantages of this are:
The number of triangles it takes to draw some characters can be very large - especially if you want them to look good. This can be bad for performance.
Designing fonts is both difficult and costly.
Doing fonts with coloured borders, drop-shadows, etc. exacerbates the other two problems significantly.
The advantages are:
Geometric fonts can be scaled, rotated, twisted, morphed, extruded.
You can use fancy lighting models, environment mapping, texturing, etc.
If used in a 3D world, you can perform collision detection with them.
Geometric fonts scale nicely. They don't exhibit bad aliasing artifacts and they don't get 'fuzzy' as they are enlarged.
Texture-Mapped Fonts.
Typically, the entire font is stored in one or two large texture maps and each letter is drawn as a single quadrilateral. The disadvantages are:
The size of the texture map you need may have to be quite large - especially if you need both upper and lower case - and/or if you want to make the font look nice at large point sizes. This is especially a problem on hardware that only supports limited texture map sizes (e.g. 3dfx Voodoos can only render maps up to 256x256).
If you use MIPmapping, then scaling the font makes it look a little fuzzy. If you don't use MIPmapping, it'll look horribly aliased.
The advantages are:
Generality - you can use an arbitrary full-colour image for each letter of the font.
Texture fonts can be rotated and scaled - although they always look 'flat'.
It's easy to convert other kinds of fonts into texture maps.
You can draw them in the 3D scene and they will be illuminated correctly.
SPEED! Textured fonts require just one quadrilateral to be sent to the hardware for each letter. That's probably an order of magnitude faster than either Raster or Geometric fonts. Since low-end 3D hardware is highly optimised for drawing simple textured polygons, speed is also enhanced because you are 'on the fast path' through the renderer. (CAVEAT: On software-only OpenGLs, textured fonts will be S-L-O-W.)
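To illustrate the texture-mapped approach, here is a sketch that draws one character as a single textured quad. It assumes a hypothetical 256-glyph ASCII atlas laid out as a 16x16 grid, already bound with texturing and blending enabled (classic immediate-mode GL, to match the era of this advice):

// One glyph = one quad; (u, v) addresses the glyph's cell in the atlas.
void drawGlyph(unsigned char c, float x, float y, float size)
{
    const float s = 1.0f / 16.0f;   // one cell's extent in texture space
    const float u = (c % 16) * s;
    const float v = (c / 16) * s;
    glBegin(GL_QUADS);
    glTexCoord2f(u,     v + s); glVertex2f(x,        y);
    glTexCoord2f(u + s, v + s); glVertex2f(x + size, y);
    glTexCoord2f(u + s, v    ); glVertex2f(x + size, y + size);
    glTexCoord2f(u,     v    ); glVertex2f(x,        y + size);
    glEnd();
}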
Links to some Free Font Libraries:
glut
glTexFont
fnt
GLTT
freetype
Freetype: http://freetype.sourceforge.net/index2.html
And: http://oglft.sourceforge.net/
I use FTGL, which builds on top of freetype. To create 3D, extruded text, I make these calls:
#include <FTGL/ftgl.h>
#include <FTGL/FTFont.h>
...
FTFont* font = new FTExtrudeFont("path_to_Fonts/COOPBL.ttf");
// (in real code, check font->Error() before using the font)
font->Depth(.5); // Text is half as 'deep' as it is tall
font->FaceSize(1); // GL unit sized text
...
FTBBox bounds = font->BBox("Text");
glEnable(GL_NORMALIZE); // Because we're scaling
glPushMatrix();
glScaled(.02,.02,.02);
glTranslated(-(bounds.Upper().X() - bounds.Lower().X())/2.0,yy,zz); // Center the text horizontally (yy, zz are placement values defined elsewhere)
font->Render("Text");
glPopMatrix();
glDisable(GL_NORMALIZE);
I recommend Qt, which is the foundation of KDE, or GTK+ for GNOME. Both of them have support for OpenGL and text. With Qt you can do advanced graphics (QGraphicsView), including animation... Take a look at the Qt Demo Application.
A good start would be NeHe's OpenGL Lesson 14.
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=14
I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern (not translated, rotated, skewed, or stretched), with the "on" bits of the pattern showing up in the specified color, and the "off" bits leaving whatever had been drawn under that area untouched.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
PolygonStipple is very limited, as it allows only a 32x32 pattern, but it should work like a PatternBrush. I have no idea how to implement it in VB, though.
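A sketch of what that looks like in GL calls (the pattern bytes are just an arbitrary fine checker, and the rectangle coordinates are placeholders for your own drawing code):

// glPolygonStipple takes a 32x32-bit, window-aligned mask: "on" bits take the
// current color, "off" bits leave whatever is underneath untouched.
GLubyte pattern[128]; // 32 rows x 4 bytes per row
for (int row = 0; row < 32; ++row)
    for (int b = 0; b < 4; ++b)
        pattern[row * 4 + b] = (row & 1) ? 0xAA : 0x55;

glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(pattern);
glColor3f(0.0f, 0.0f, 1.0f);   // the "ink" color for the on bits
glRectf(x0, y0, x1, y1);       // any filled primitive works here
glDisable(GL_POLYGON_STIPPLE);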
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default if I recall correctly; be sure to check the option to compile to native code!), and if it is your code that is the bottleneck, then OpenGL won't help you much - you'll need to rewrite your code in some other language, probably C or C++, but any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They've been deprecated for a long time, and were removed from OpenGL in version 3.1 of the spec. And that's for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right number of times, as sketched below.
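A sketch of that coordinate calculation, assuming an 8x8-texel pattern texture created elsewhere and vertices given in screen pixels (names are illustrative):

// Texture coordinates = screen coordinates / pattern size, so the pattern
// repeats at its native texel size instead of stretching with the polygon.
void drawPatternedRect(GLuint patternTexture, float x0, float y0, float x1, float y1)
{
    const float P = 8.0f; // pattern width/height in pixels
    glBindTexture(GL_TEXTURE_2D, patternTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(x0 / P, y0 / P); glVertex2f(x0, y0);
    glTexCoord2f(x1 / P, y0 / P); glVertex2f(x1, y0);
    glTexCoord2f(x1 / P, y1 / P); glVertex2f(x1, y1);
    glTexCoord2f(x0 / P, y1 / P); glVertex2f(x0, y1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}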
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders for that would be overkill for an OpenGL beginner like yourself :-).
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as an image and turn on texture repeating, and you are good to go.
Figured this out a year or two ago.