SDL - Dynamic Alpha? (C++)

I plan on making a game (in SDL) where, if one character moves, the part of the image it was on turns alpha, thus allowing me to place a scrolling image underneath the original scene.
1) Is this possible?
2) If yes to #1, how can I go about implementing this (not to give me code, but to guide me in the right direction).

It sounds like you want to learn about image compositing.
A typical game these days has a redraw function somewhere that redraws the entire screen; the whole scene is redrawn every frame.
void redraw()
{
drawBackground();
drawCharacters();
drawHUD();
swapBuffers();
}
This is as simple as it gets: by using the right blending modes, each time you draw something it appears on top of what was drawn before. Older games are much more complicated because they don't redraw the entire screen at a time (or don't use a framebuffer), and newer games are much more complicated because they draw the world front-to-back and back-to-front in multiple passes for different types of objects.
SDL has software image compositing functions which you can use, or you can use OpenGL (which may use a combination of software and hardware). I personally use OpenGL because it is more powerful (lets you draw more complicated scenes), but the SDL compositing functions are easier to use. There are many excellent tutorials and many more mediocre or terrible tutorials online.
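For example (a sketch only, not from the question: the surface names are placeholders, and it assumes SDL 1.2-style surfaces where the scene layer was loaded with an alpha channel, e.g. a PNG via SDL_image), a per-frame composite with SDL's blit functions could look like this:
#include <SDL/SDL.h>

void redraw(SDL_Surface *screen, SDL_Surface *scrollLayer,
            SDL_Surface *scene, SDL_Surface *character,
            SDL_Rect scrollPos, SDL_Rect characterPos)
{
    SDL_BlitSurface(scrollLayer, NULL, screen, &scrollPos); // back layer first
    SDL_BlitSurface(scene, NULL, screen, NULL);             // alpha "holes" show the layer beneath
    SDL_BlitSurface(character, NULL, screen, &characterPos);
    SDL_Flip(screen);                                        // present the composited frame
}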
I'm not sure what you mean when you say "the part of the image it was on turns alpha". The alpha channel does not appear on screen, you cannot see it, it just affects how two images are composited.


C++ Zooming Graphical Content

I'm trying to make a program that handles graphics and I am not quite sure how to implement zooming. I have done a zooming effect with primitive shapes such as lines and circles (with SDL_gfxPrimitives) by scaling them down, but that won't work for a picture. How would I implement zooming?
There is an SDL library that supports zooming; a short usage sketch follows the component list below:
SDL2_gfx Library
The SDL_gfx library evolved out of the SDL_gfxPrimitives code, which provided basic drawing routines such as lines, circles or polygons, and SDL_rotozoom, which implemented an interpolating rotozoomer for SDL surfaces.
The current components of the SDL_gfx library are:
Graphic Primitives (SDL_gfxPrimitives.h)
Rotozoomer (SDL_rotozoom.h)
Framerate control (SDL_framerate.h)
MMX image filters (SDL_imageFilter.h)
Custom Blit functions (SDL_gfxBlitFunc.h)
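As a rough usage sketch (mine, not from the docs; 'screen' and 'picture' are surfaces you already have), zooming a picture with SDL_rotozoom could look like this:
#include <SDL/SDL.h>
#include <SDL/SDL_rotozoom.h>

void drawZoomed(SDL_Surface *screen, SDL_Surface *picture, double zoom)
{
    // smooth = 1 enables interpolation, which matters when scaling pictures
    SDL_Surface *scaled = zoomSurface(picture, zoom, zoom, 1);
    SDL_BlitSurface(scaled, NULL, screen, NULL);
    SDL_FreeSurface(scaled); // zoomSurface allocates a new surface on every call
}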
Your question is not specific enough to produce the specific answer you appear to be looking for.
What I can offer you is the suggestion that you first come up with a way to represent zooming.
If you already know how to draw a picture, consider that in computer graphics, "zooming in" or "zooming out" is almost always nothing more than drawing your picture at a progressively larger or smaller size.
With that in mind, maybe you will begin to see that a reasonable way to represent the concept of zooming is with some form of Camera class that will unambiguously determine the size and location of the pictures you draw.
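For instance (purely illustrative, not something the answer specifies), such a Camera class could be as small as this:
struct Camera {
    double centerX, centerY; // world-space point the camera looks at
    double zoom;             // 1.0 = normal, 2.0 = twice as large on screen

    // Map a world-space point to pixel coordinates on a screenW x screenH window.
    void worldToScreen(double wx, double wy, int screenW, int screenH,
                       int &sx, int &sy) const
    {
        sx = int((wx - centerX) * zoom) + screenW / 2;
        sy = int((wy - centerY) * zoom) + screenH / 2;
    }
};
Every draw call then positions itself with worldToScreen() and multiplies its size by zoom, so changing a single number zooms the whole scene.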

How do I output 3D images to my 3D TV?

I have a 3D TV and feel that I would be shirking my responsibilities (as a geek) if I didn't at least try to make it display pretty 3D images of my own creation!
I've done a very basic amount of OpenGL programming before, so I understand the concepts involved. Assume that I can render a simple tetrahedron or cube and make it spin around a bit; how can I get my 3D TV to display this image in, well, 3D?
Note that I understand the basics of how 3D works (render the same image twice from 2 different angles, one for each eye); my question is about the logistics of actually doing this (do I need an SDK? etc.)
The TV I have uses polarization 3D, although my intention is that this question also be relevant to other 3D technologies (if possible)
My laptop has an HDMI output, which is what I intend to use to connect to my TV (does this make any difference compared to using a VGA / component video cable?)
In the past I have experimented with GLUT / OpenGL; however, if it's easier (or only really possible) to do this with some alternative technology, that's fine.
The main problem is getting your GPU to send a stereoscopic format. In the case of an HDMI connection this will not work without driver support. If you have a professional-grade GPU (Quadro, FireGL), it likely supports OpenGL quad buffers, i.e. you get framebuffers for the left and right eye, both back and front:
glDrawBuffer(GL_BACK_LEFT);
render_left_eye();
glDrawBuffer(GL_BACK_RIGHT);
render_right_eye();
glDrawBuffer(GL_BACK); // renders to both eyes simultaneously
render_screen_level_and_nonstereoscopic();
SwapBuffers();
Unfortunately, OpenGL quad-buffer stereo is considered professional-grade stuff.
Instead, NVidia (at least) provides its own stereoscopy library plus some extensions to control it. The main reasoning is that shared fragments should be rendered only once and then sent to both eyes with the appropriate parallax applied. However, from my semi-professional experience with stereoscopy¹, these kinds of semi-automatic stereoscopifications just don't cut it. Stereoscopy requires tight control of the whole "production" pipeline, otherwise you're screwed. With Elephants Dream I went as far as modifying the renderer's core code.
I sent the people at the 3D division at NVidia some case scenarios where you need exact control over the stereoscopy process, and I hope they will see the light and give access to quad-buffer stereo on consumer-grade hardware as well.
Note that I understand the basics of how 3D works (render the same image twice from 2 different angles, one for each eye)
Actually you don't render from two different angles, but with a parallax shift and lens shift. Otherwise you get trapezoidal/keystone distortion in the horizontal, which is very, very unpleasant to watch (in fact I now think that in the stereoscopic rendering process one should slightly diverge the optical axes – i.e. do the complete contrary of what one would naively do – and "over"compensate with lens shift; I'm currently preparing a small study about this, but still need to gather my testing and control groups).
¹: heck, I'm the guy who single-handedly stereographed Elephants Dream, rendered it and got it an award at a 3D movie festival.
Because you have a passive 3D TV, it's likely that the left and right eye views are rendered on alternate scan lines (or perhaps on alternate pixels in a checkerboard pattern).
Thus your mission is to render the left-eye view to the even numbered scan lines, and the right eye view to the odd numbered scan lines (or vice versa). This can be accomplished either via OpenGL stencil operations, or, more modernly, using custom fragment shaders.
This way, you can avoid the whole quad-buffered video card/GL_BACK_LEFT/GL_BACK_RIGHT approach described by datenwolf. And you want to avoid that approach, as I have never encountered a video driver that directs quad-buffered stereo 3D to an actual 3D TV.
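A rough sketch of the stencil variant (my own; it assumes the window was created with a stencil buffer and that the TV interlaces by scan line):
// Run once after the window is created or resized: write 1s into every other scan line.
void buildInterlaceStencil(int width, int height)
{
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // touch only the stencil

    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0, width, 0, height, -1, 1);
    glMatrixMode(GL_MODELVIEW); glLoadIdentity();
    glBegin(GL_LINES);
    for (int y = 0; y < height; y += 2) {
        glVertex2f(0.0f, y + 0.5f);
        glVertex2f((float)width, y + 0.5f);
    }
    glEnd();

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); // stop writing the stencil
}

// Each frame: draw the left eye only where the stencil is 1, the right eye where it is 0.
//   glStencilFunc(GL_EQUAL, 1, 1); renderLeftEye();
//   glStencilFunc(GL_EQUAL, 0, 1); renderRightEye();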
I agree with datenwolf's advice that you should use asymmetric frustum shift rather than scene rotation to generate the right and left eye viewpoints.
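A sketch of that asymmetric (off-axis) frustum setup, using the old fixed-function matrix calls for brevity; the eye separation and convergence distance are values you would tune, and the function name is made up:
#include <cmath>
#include <GL/gl.h>

// eye = -1 for the left eye, +1 for the right eye; fovY is in radians.
void applyEyeProjection(double fovY, double aspect, double zNear, double zFar,
                        double eyeSeparation, double convergence, int eye)
{
    double top   = zNear * std::tan(fovY * 0.5);
    double right = top * aspect;
    // Shift the frustum sideways instead of rotating the cameras inward,
    // which is what avoids the keystone distortion mentioned above.
    double shift = -eye * 0.5 * eyeSeparation * zNear / convergence;

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-right + shift, right + shift, -top, top, zNear, zFar);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    // Move the camera itself half the eye separation to the side.
    glTranslated(-eye * 0.5 * eyeSeparation, 0.0, 0.0);
}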

Anti-aliasing in OpenGL

I just started with OpenGL programming and I am building a clock application. I want it to look something simple like this: http://i.stack.imgur.com/E73ap.jpg
However, my application looks very "un-anti-aliased" : http://i.stack.imgur.com/LUx2v.png
I tried the GL_POLYGON_SMOOTH method mentioned in the Red Book. However, that doesn't seem to do a thing.
I am working on a laptop with Intel integrated graphics. The card doesn't support things like GL_ARB_multisample.
What are my options at this point to make my app look anti-aliased?
Intel integrated videocards are notorious for their lack of support for OpenGL antialiasing. You can work around that, however.
First option: Manual supersampling
Make a texture 2x as big as the screen. Render your scene to the texture via an FBO, then render the texture at half size so it fills the screen, with bilinear interpolation. This can be very slow (in complex scenes) due to the 4x increase in pixels to draw.
It will only result in weak antialiasing, so I don't recommend it for desktop software like your clock.
Second option: (advanced)
Use a shader to perform Morphological Antialiasing. This is a new technique and I don't know how easy it is to implement. It's used by some advanced games.
Third option:
Use textures and bilinear interpolation to your advantage by emulating OpenGL's primitives via textures. The technique is described here.
Fourth option:
Use a separate texture for every element of your clock.
For example, for your hour-arrow, don't use a flat black GL_POLYGON shaped like your arrow. Instead, use a rotated GL_QUAD, textured with an hour-arrow image drawn in an image program. Then bilinear interpolation will take care of antialiasing it as you rotate it.
This option takes the least effort and looks very good.
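As a rough illustration (my own sketch; hourTex and angleDeg are assumed to exist, and the texture was created with GL_LINEAR filtering), the hour-arrow becomes a rotated, blended, textured quad:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, hourTex);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glPushMatrix();
glRotatef(angleDeg, 0.0f, 0.0f, 1.0f); // spin around the clock centre
glBegin(GL_QUADS);                     // thin quad pointing "up" from the centre
glTexCoord2f(0, 0); glVertex2f(-0.05f, 0.0f);
glTexCoord2f(1, 0); glVertex2f( 0.05f, 0.0f);
glTexCoord2f(1, 1); glVertex2f( 0.05f, 0.6f);
glTexCoord2f(0, 1); glVertex2f(-0.05f, 0.6f);
glEnd();
glPopMatrix();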
Fifth option:
Use a library that supports software rendering -
Qt
Cairo
Windows GDI+
WPF
XRender
etc
Such libraries contain their own algorithms for antialiased rendering, so they don't depend on your videocard for antialiasing. The advantages are:
Will render the same on every platform (this is not guaranteed with OpenGL in various cases; for example, the thick diagonal "tick" lines in your screenshot are rendered as parallelograms rather than rectangles).
Has a big set of convenient drawing functions ("drawArc", "drawText", "drawConcavePolygon", ...), which support gradients and borders; you also get things like an Image class.
Some, like Qt, will provide much more desktop-app type functionality. This can be very useful even for a clock app. For example:
in an OpenGL app you'd probably loop every 20msec and re-render the clock, and not even think twice. This would hog unnecessary CPU cycles, and wake up the CPU on a laptop, depleting the battery. By contrast, Qt is very intelligent about when it must redraw parts of your clock (e.g., when the right half of the clock stops being covered by a window, or when your clock moves the minute-arrow one step).
once you get to implementing, e.g. a tray icon, or a settings dialog, for your clock, a library like Qt can make it a snap. It's nice to use the same library for everything.
The disadvantage is much worse performance, but that doesn't matter at all for a clock app, and it is more than offset by the intelligent-redrawing functionality I mentioned.
For something like a clock app, the fifth option is very much recommended. OpenGL is mainly useful for games, 3D software and intense graphical stuff like music visualizers. For desktop apps, it's too low-level and the implementations differ too much.
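For example, with Qt the whole clock can be drawn in a paintEvent with a single render hint turning on software antialiasing (a sketch; ClockWidget and secondsAngle are made-up names):
#include <QWidget>
#include <QPainter>

class ClockWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent *) override {
        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing, true); // software AA, no GPU involved
        p.translate(width() / 2, height() / 2);        // origin at the clock centre
        p.rotate(secondsAngle);                        // degrees, clockwise
        p.setPen(QPen(Qt::black, 3));
        p.drawLine(0, 0, 0, -qMin(width(), height()) / 2 + 10);
    }
    double secondsAngle = 0.0; // whatever drives your second hand
};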
Draw it into a framebuffer object at twice (or more) the final resolution and then use that image as a texture for a single quad drawn in the actual window.
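A condensed sketch of that approach (assuming the driver exposes framebuffer objects and a loader such as GLEW provides the entry points; drawClock(), winW and winH are placeholders):
// One-time setup: a texture twice the window size, attached to an FBO.
GLuint fbo, colorTex;
int superW = winW * 2, superH = winH * 2;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, superW, superH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // bilinear downscale
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Every frame: render big into the FBO, then draw it as one window-sized quad.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, superW, superH);
drawClock();
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, winW, winH);

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, colorTex);
glMatrixMode(GL_PROJECTION); glLoadIdentity(); // identity matrices: vertices are in clip space
glMatrixMode(GL_MODELVIEW);  glLoadIdentity();
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();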

OpenGL equivalent of GDI's HatchBrush or PatternBrush?

I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern and choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern (not translated, rotated, skewed or stretched), with the "on" bits of the pattern showing up in the specified color, and the "off" bits left displaying whatever had been drawn under the area I am now drawing on.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern based upon the vertices of the polygon, whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
PolygonStipple is very limited, as it allows only a 32x32 pattern, but it should work like a PatternBrush. I have no idea how to implement it in VB, though.
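A minimal usage sketch (mine; keep in mind the deprecation caveat in the next answer): the pattern is a fixed 32x32 bitmask (128 bytes) anchored to window coordinates, so it tiles rather than following the polygon:
GLubyte pattern[128];
for (int i = 0; i < 128; ++i)
    pattern[i] = 0xAA;                 // example hatch: every other bit set

glEnable(GL_POLYGON_STIPPLE);
glPolygonStipple(pattern);
glColor3f(0.0f, 0.0f, 1.0f);           // the "on" bits are drawn in this colour
glRectf(10.0f, 10.0f, 200.0f, 120.0f); // the "off" bits leave the background untouched
glDisable(GL_POLYGON_STIPPLE);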
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which is the default as far as I recall; be sure to check the option to compile to native code!), and if it is your code that is the bottleneck, then OpenGL won't help you much - you'll need to rewrite your code in some other language, probably C or C++, but any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time and were removed from OpenGL in version 3.1 of the spec. And that's for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right amount of times.
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders for that would be overkill for an OpenGL beginner like yourself :-).
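A sketch of the tiled-texture approach described above (my own; patternPixels is an assumed array of 64 bytes, 0 or 255, for an 8x8 monochrome pattern):
// Upload the pattern as an alpha texture that repeats in both directions.
GLuint patternTex;
glGenTextures(1, &patternTex);
glBindTexture(GL_TEXTURE_2D, patternTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 8, 8, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, patternPixels);

// When drawing, use window coordinates divided by the pattern size as texture
// coordinates, so the pattern keeps its original pixel size instead of stretching.
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // "off" bits stay transparent
glColor3f(1.0f, 0.0f, 0.0f);                       // "on" bits take this colour
glBegin(GL_QUADS);
glTexCoord2f( 10/8.0f,  10/8.0f); glVertex2f( 10.0f,  10.0f);
glTexCoord2f(200/8.0f,  10/8.0f); glVertex2f(200.0f,  10.0f);
glTexCoord2f(200/8.0f, 120/8.0f); glVertex2f(200.0f, 120.0f);
glTexCoord2f( 10/8.0f, 120/8.0f); glVertex2f( 10.0f, 120.0f);
glEnd();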
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as an image, turn on texture repeating, and you are good to go.
Figured this out a year or two ago.

Best way to render hand-drawn figures

I guess I'll illustrate with an example:
In this game you are able to draw 2D shapes using the mouse and what you draw is rendered to the screen in real-time. I want to know what the best ways are to render this type of drawing using hardware acceleration (OpenGL). I had two ideas:
Create a screen-size texture when drawing is started, update this when drawing, and blit this to the screen
Create a series of line segments to represent the drawing, and render these using either lines or thin polygons
Are there any other ideas? Which of these methods is likely to be best/most efficient/easiest? Any suggestions are welcome.
I love crayon physics (music gets me every time). Great game!
But back to the point... He has created brush sprites that follow your mouse position, with a few different brushes to account for a little variation. Once the mouse goes down, I imagine he adds these sprites to a data structure and sends that structure through his drawing and collision functions to loop through, which produces the real-time effect. He is using the Simple DirectMedia Layer (SDL) library, which I give two thumbs up.
I'm pretty sure the second idea is the way to go.
Use the first option if the player draws pure freehand (rather than lines) and what they draw doesn't need to be animated.
Use the second option if it is animated or is primarily lines. If you choose this, you'd need to draw thin polygons rather than regular lines to get any kind of interesting look (as in the crayon example).
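For the second option, a very rough sketch (my own) of turning the mouse points of one stroke into thin quads with a given brush width; the joins between segments aren't mitred here, which is usually fine for a crayon look:
#include <cmath>
#include <vector>
#include <GL/gl.h>

struct Pt { float x, y; };

void drawStroke(const std::vector<Pt> &pts, float halfWidth)
{
    glBegin(GL_QUADS);
    for (size_t i = 1; i < pts.size(); ++i) {
        float dx = pts[i].x - pts[i - 1].x;
        float dy = pts[i].y - pts[i - 1].y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) continue;
        float nx = -dy / len * halfWidth; // unit normal scaled to half the brush width
        float ny =  dx / len * halfWidth;
        glVertex2f(pts[i - 1].x + nx, pts[i - 1].y + ny);
        glVertex2f(pts[i - 1].x - nx, pts[i - 1].y - ny);
        glVertex2f(pts[i].x - nx,     pts[i].y - ny);
        glVertex2f(pts[i].x + nx,     pts[i].y + ny);
    }
    glEnd();
}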