So I've come to C++ with GLEW and GLUT from Java's LWJGL. I've got a spinning rectangle working with simple glBegin(GL_QUADS) code and the like. But how do I request a pixel format like in LWJGL?
The best you can do is the on/off flags of glutInitDisplayMode():
GLUT_RGBA: Bit mask to select an RGBA mode window. This is the default if neither GLUT_RGBA nor GLUT_INDEX is specified.
GLUT_RGB: An alias for GLUT_RGBA.
GLUT_INDEX: Bit mask to select a color index mode window. This overrides GLUT_RGBA if it is also specified.
GLUT_SINGLE: Bit mask to select a single-buffered window. This is the default if neither GLUT_DOUBLE nor GLUT_SINGLE is specified.
GLUT_DOUBLE: Bit mask to select a double-buffered window. This overrides GLUT_SINGLE if it is also specified.
GLUT_ACCUM: Bit mask to select a window with an accumulation buffer.
GLUT_ALPHA: Bit mask to select a window with an alpha component in the color buffer(s).
GLUT_DEPTH: Bit mask to select a window with a depth buffer.
GLUT_STENCIL: Bit mask to select a window with a stencil buffer.
GLUT_MULTISAMPLE: Bit mask to select a window with multisampling support. If multisampling is not available, a non-multisampled window is automatically chosen. Note: both the OpenGL client-side and server-side implementations must support the GLX_SAMPLE_SGIS extension for multisampling to be available.
GLUT_STEREO: Bit mask to select a stereo window.
GLUT_LUMINANCE: Bit mask to select a window with a "luminance" color model. This model provides the functionality of OpenGL's RGBA color model, but the green and blue components are not maintained in the frame buffer. Instead, each pixel's red component is converted to an index between zero and glutGet(GLUT_WINDOW_COLORMAP_SIZE)-1 and looked up in a per-window color map to determine the color of pixels within the window. The initial colormap of a GLUT_LUMINANCE window is a linear gray ramp, but it can be modified with GLUT's colormap routines.
You can't request a specific number of alpha/depth/stencil/etc. bits like you can with LWJGL's PixelFormat.
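For completeness, here is a minimal sketch of how those flags are combined when the window is created (the window title and size are just placeholders):

#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* Request a double-buffered RGBA window with depth and stencil planes.
       The flags are simply OR'ed together. */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
    glutInitWindowSize(640, 480);
    glutCreateWindow("spinning rectangle");
    /* ... register callbacks and call glutMainLoop() ... */
    return 0;
}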
I have a DirectX 11 texture in ARGB format (different pixels have different alpha values, like the one below).
I need to render that texture on a transparent window, meaning the desktop should appear behind the texture.
I am using SetLayeredWindowAttributes, which can make the window transparent, but it is all-or-nothing: a pixel either appears fully or doesn't appear at all. I need per-pixel transparency, where each pixel's opacity is determined by its alpha value (something like AlphaBlend). How can I achieve this?
Use UpdateLayeredWindow instead. Select a 32-bit ARGB bitmap into the source HDC.
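A rough sketch of that call (it assumes hwnd is a WS_EX_LAYERED window and hdcMem is a memory DC with a 32-bit premultiplied-ARGB DIB section already selected into it; width and height are the bitmap dimensions):

#include <windows.h>

/* Per-pixel alpha from the source bitmap is honoured when AlphaFormat is
   AC_SRC_ALPHA and the flag is ULW_ALPHA. */
BLENDFUNCTION bf = {0};
bf.BlendOp = AC_SRC_OVER;
bf.SourceConstantAlpha = 255;   /* use only the per-pixel alpha */
bf.AlphaFormat = AC_SRC_ALPHA;  /* source has a (premultiplied) alpha channel */

SIZE size = { width, height };
POINT srcPos = { 0, 0 };

UpdateLayeredWindow(hwnd, NULL, NULL, &size, hdcMem, &srcPos,
                    0, &bf, ULW_ALPHA);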
A fancier solution is WS_EX_NOREDIRECTIONBITMAP and ICompositorDesktopInterop, but that is probably overkill in this case unless you need to do updates often. MSDN Magazine has had a few articles about this. DirectComposition is intended to interoperate with Direct2D etc., whereas UpdateLayeredWindow is much older and predates the DWM and any kind of visual-tree rendering.
I use Kivy in my application and am trying to create a window with a transparent background. I do this with:
from kivy.core.window import Window

Window.clearcolor = (1, 1, 1, 0)
Window.clear()
That produces a white window that is completely opaque.
Kivy directly calls glClearColor from the OpenGL 4 API (https://www.khronos.org/opengl/).
The docs say that the last parameter is the alpha channel, so I expect my window to be transparent.
Do I have a mistake in my thinking or is this a bug?
Default pixel formats are often plain RGB, so the alpha value is only used for blending operations. You need a pixel format with an alpha channel to make your surfaces transparent; see this answer.
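As a rough, not Kivy-specific illustration of the point: the alpha you pass to glClearColor only survives if the default framebuffer actually has alpha bitplanes, which you can check like this:

GLint alphaBits = 0;
glGetIntegerv(GL_ALPHA_BITS, &alphaBits);
/* If alphaBits is 0, the clear color's alpha is simply discarded and the
   window stays opaque; an alpha-capable pixel format has to be requested
   when the window is created, and the OS compositor must honour it. */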
I'm beating my way through an SDL 1.2 -> 2.0 conversion, one problem at a time. So far I have sound, interaction, and a screen that shows... well, something.
I'm sure the problem is due to bit depth and/or formats. The original code used 8-bit indexed SPR files for the sprites, loaded them into a series of uint8 *buffu, and then blitted them to the display's Surface.
I have ported this, following the migration guide and significant trial and error (lots of modes and switches simply don't work on my machine), by creating a Texture and a Surface, letting the old code blit into the Surface, and then doing this...
SDL_UpdateTexture(sdltxtr, NULL, sdlsurf->pixels, 640 * sizeof (uint8));
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);
The result is a screen with stuff, but it's all misaligned. I assume that is because the Surface and Texture have different bit depths and formats than the sprites...
sdltxtr = SDL_CreateTexture(sdlrend,
SDL_PIXELFORMAT_ARGB8888,
SDL_TEXTUREACCESS_STREAMING,
640, 480);
sdlsurf = SDL_CreateRGBSurface(0, 640, 480, 8, 0,0,0,0);
I've tried various settings from the documentation to try to get a surface or texture that's 8-bit indexed, but all of the flags cause the Surface or Texture to be empty.
Any suggestions?
You mention indexed 8-bit graphics. Assuming that indexed means palettized, you can't simply use the pixel buffer as-is, since it is just a list of indices into the color palette associated with the SDL_Surface.
In other words, you need a buffer that holds the actual ARGB values looked up from the palette, not the index values stored in the pixel buffer.
You could use SDL_ConvertSurfaceFormat() to convert your 8-bit palettized surface to a 32-bit ARGB surface and upload its buffer into the SDL_Texture. You could also create your own 32-bit ARGB buffer and do the conversion yourself by looking up the palette entries (the first option is easier in most cases, though).
Before you convert the 8-bit SDL_Surface to a 32-bit one, you should associate a valid palette with it (SDL_SetSurfacePalette()).
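A sketch of that approach, reusing the names from the question (the palette variable is assumed to be an SDL_Palette built from the SPR data):

/* Attach the palette, then convert to the texture's pixel format. */
SDL_SetSurfacePalette(sdlsurf, palette);

SDL_Surface *argb = SDL_ConvertSurfaceFormat(sdlsurf, SDL_PIXELFORMAT_ARGB8888, 0);
if (argb) {
    /* Use the converted surface's pitch instead of 640 * sizeof (uint8). */
    SDL_UpdateTexture(sdltxtr, NULL, argb->pixels, argb->pitch);
    SDL_FreeSurface(argb);
}
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);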
(Using WinApi) Is there a way to:
Make transparent pixels?
Somehow, instead of using transparency, have the image dynamically pick up the background colors and textures, and fill certain colors with them? For example: if I had a video game sprite whose background color was white, could I somehow take those white pixels and fill them with the background colors/textures?
If you create a 32-bit bitmap, 24 bits of each pixel are used for RGB values and the extra 8 bits are used for an alpha channel. Just set the alpha to 0 for full transparency.
When creating a bitmap that uses 24-bit or smaller pixels, the transparent color is usually indicated by the pixel in the lower-left corner of the bitmap.
Either way, creating a transparent bitmap is only half the equation. Creating the bitmap itself is easy, but you then have to render it in a transparent manner. The Win32 API has the TransparentBlt() and AlphaBlend() functions for that purpose, and there are plenty of online tutorials and blogs that explain how to use them.
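A short sketch of both calls (hdcDest, hdcSrc, x, y, w and h are placeholders for your destination DC, a memory DC with the sprite bitmap selected, and the blit rectangle; link against Msimg32.lib):

#include <windows.h>

/* Color-keyed blit: every white source pixel is skipped. */
TransparentBlt(hdcDest, x, y, w, h,
               hdcSrc, 0, 0, w, h,
               RGB(255, 255, 255));

/* Per-pixel alpha blit for a 32-bit bitmap with premultiplied alpha. */
BLENDFUNCTION bf = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
AlphaBlend(hdcDest, x, y, w, h,
           hdcSrc, 0, 0, w, h, bf);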
Is there a standard way, using only OpenGL functions, to obtain the size of the backbuffer, in pixels? The closest I've found is querying the size of the viewport, but I doubt it always matches the backbuffer size. I'm looking for the maximum width and height values I can supply to glReadPixels, for example.
The backbuffer is one of the window's framebuffer planes (front color, back color, depth, stencil, etc., depending on what has been configured). Since they all belong together, they all have the same dimensions. And since the default framebuffer is tied to the visible window, the accessible area is determined by the window's dimensions.
However, there's one important thing to be aware of: a window's framebuffer is (or used to be) just a slice of the screen framebuffer. If the window is moved or resized, only the offset and the stride of that slice in the screen framebuffer change. It also means that if the window is (partially) obstructed by another window, the occluded parts don't contribute to the framebuffer. When reading from a framebuffer partially covered by other windows, you may end up with either undefined contents there or the contents of the occluding window.
On modern operating systems, windows are rendered into off-screen areas that cannot be occluded, in order to support compositing window management.
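In practice, the reliable size comes from the windowing layer rather than from OpenGL itself. With GLUT, for example, a sketch (the helper name is just illustrative) would be:

#include <GL/glut.h>
#include <stdlib.h>

/* Sketch: read back the whole back buffer; the caller frees the result. */
unsigned char *read_back_buffer(int *out_w, int *out_h)
{
    /* For the default framebuffer, the window's client-area size matches the
       dimensions of all its planes, including the back buffer. */
    int w = glutGet(GLUT_WINDOW_WIDTH);
    int h = glutGet(GLUT_WINDOW_HEIGHT);

    unsigned char *pixels = malloc((size_t)w * (size_t)h * 4);
    glReadBuffer(GL_BACK);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    *out_w = w;
    *out_h = h;
    return pixels;
}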