Load and use an alpha channel bitmap - OpenGL

Considering the following alpha channel bitmap:
const int width = 4, height = 4;
var alpha = new byte[width * height] {
255, 255, 255, 255,
0, 127, 127, 0,
0, 127, 127, 0,
255, 255, 255, 255
};
I would like to load it into OpenGL and use it in a fragment shader like this:
out vec4 color;
void main() {
float alpha = // Get pixel alpha like texture()
color = vec4(1, 0, 0, alpha);
}
I know it is possible to use an RGBA bitmap texture, but is it possible to use an alpha-channel-only bitmap texture (from the simplest byte[] type)?

You don't need an alpha channel bitmap; you can use a one-channel bitmap texture, as in:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, byteBuffer);
Here, GL_RED says that we only want a texture with a single channel (red).
So in the fragment shader you can do:
float alpha = texture(...).r;
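One practical detail: with one byte per pixel, rows are only read correctly if you drop the default unpack alignment of 4. Below is a minimal sketch of the upload, assuming the width, height and alpha buffer from the question and a GL 3+ context; GL_R8 is used as the sized internal format:
// Sketch: upload a tightly packed width*height alpha buffer as a single-channel texture.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows are width bytes, not necessarily a multiple of 4
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, alpha);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
In the shader you then sample the red channel, exactly as in the line above.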

Related

Register a texture array from OpenGL to CUDA

I want to register a texture array created with OpenGL to CUDA. For that I simply use the interoperability function cudaGraphicsGLRegisterImage (see the CUDA documentation):
void registerTextureInCUDA()
{
// _textureDepth = 2 here
GLenum target = _textureDepth < 2 ? GL_TEXTURE_2D : GL_TEXTURE_2D_ARRAY;
GLuint texture = 0;
GLsizei width = 2;
GLsizei height = 2;
GLsizei layerCount = 2;
GLsizei mipLevelCount = 1;
// Read your texels here. In this example we have 2*2*2 = 8 texels, each texel being 4 GLubytes.
GLubyte texels[32] =
{
// Texels for first image.
0, 0, 0, 255,
255, 0, 0, 255,
0, 255, 0, 255,
0, 0, 255, 255,
// Texels for second image.
255, 255, 255, 255,
255, 255, 0, 255,
0, 255, 255, 255,
255, 0, 255, 255,
};
glGenTextures(1,&texture);
glBindTexture(target,texture);
// No error after this call
GL_CHECK();
CUDA_CHECK(cudaGraphicsGLRegisterImage(&_pGraphicsResource, texture, target, cudaGraphicsRegisterFlagsWriteDiscard));
}
For a simple GL_TEXTURE_2D I get no error and I can write into the texture normally, but with GL_TEXTURE_2D_ARRAY I get the following error:
Cuda error: 1 invalid argument
This return value does not seem to be documented for this call in the CUDA documentation. Which argument could be the cause here?
I found the problem: I didn't allocate the storage for the texture. I just need to add this line before registering the texture in CUDA:
glTexStorage3D(target, mipLevelCount, GL_RGBA8, width, height, layerCount);
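For context, here is a sketch of how the end of registerTextureInCUDA() looks with the fix applied; the glTexSubImage3D upload is optional, but it shows the storage is in place before CUDA sees the texture:
glGenTextures(1, &texture);
glBindTexture(target, texture);
// Allocate immutable storage for all layers and mip levels first.
glTexStorage3D(target, mipLevelCount, GL_RGBA8, width, height, layerCount);
// Optionally upload the texel data into the allocated storage.
glTexSubImage3D(target, 0, 0, 0, 0, width, height, layerCount, GL_RGBA, GL_UNSIGNED_BYTE, texels);
GL_CHECK();
// Registration now succeeds because the texture has backing storage.
CUDA_CHECK(cudaGraphicsGLRegisterImage(&_pGraphicsResource, texture, target, cudaGraphicsRegisterFlagsWriteDiscard));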

How to load a sf::Image from an array of pixels in SFML?

I would like to load an image in SFML from a 2D array containing 3 values (RGB) for each pixel. The array would look something like this:
{
{{255, 255, 255}, {255, 255, 255}},
{{255, 255, 255}, {255, 255, 255}}
}
The array above describes a 2x2 image that is entirely white. How can I turn this into an image in SFML (sf::Image)?
If you want to create an sf::Image object from a pixel array, then you are interested in the sf::Image::create() member function overload that takes a const Uint8 *:
void sf::Image::create(unsigned int width, unsigned int height, const Uint8 * pixels);
As the name suggests, the last parameter, pixels, corresponds to the array of pixels you want to create the sf::Image from. Note that this pixel array is assumed to be in the RGBA format (this contrasts with the RGB format suggested in the code of the question). That is, the array must hold four Uint8s for each pixel – i.e., a Uint8 for each component: red, green, blue and alpha.
As an example, consider the following pixel array, pixels, made up of six pixels:
const unsigned numPixels = 6;
sf::Uint8 pixels[4 * numPixels] = {
0, 0, 0, 255, // black
255, 0, 0, 255, // red
0, 255, 0, 255, // green
0, 0, 255, 255, // blue
255, 255, 255, 255, // white
128, 128, 128, 255, // gray
};
Then, we can create an sf::Image object from the pixels array of pixels:
sf::Image image;
image.create(3, 2, pixels);
The sf::Image created above is a 3x2-pixel image: black, red and green in the first row; blue, white and gray in the second. However, flipping the width and height arguments passed to sf::Image::create(), as in:
sf::Image image;
image.create(2, 3, pixels);
results in a 2x3-pixel image instead: black and red in the first row, green and blue in the second, and white and gray in the third.
Note, however, that both sf::Image objects above are created from the same array of pixels, pixels, and they both are made up of six pixels – the pixels are just arranged differently because the images have different dimensions. Nevertheless, the pixels are the same: a black, a red, a green, a blue, a white and a gray pixel.
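To get from the RGB layout in the question to the RGBA layout that sf::Image::create() expects, you just interleave an opaque alpha byte per pixel. Here is a small sketch, assuming the 2x2 white array from the question (the names rgb and rgba are placeholders):
#include <SFML/Graphics.hpp>

const unsigned width = 2, height = 2;
sf::Uint8 rgb[height][width][3] = {
    {{255, 255, 255}, {255, 255, 255}},
    {{255, 255, 255}, {255, 255, 255}}
};

// Repack into a flat RGBA buffer, adding a fully opaque alpha component.
sf::Uint8 rgba[width * height * 4];
for (unsigned y = 0; y < height; ++y)
    for (unsigned x = 0; x < width; ++x) {
        sf::Uint8* dst = &rgba[(y * width + x) * 4];
        dst[0] = rgb[y][x][0]; // red
        dst[1] = rgb[y][x][1]; // green
        dst[2] = rgb[y][x][2]; // blue
        dst[3] = 255;          // alpha (opaque)
    }

sf::Image image;
image.create(width, height, rgba);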

Is GDI+ with OpenGL possible?

I tried to make GDI+ and OpenGL work together on one offscreen HDC. The current code is something like this (no Internet access):
1. create a GDI+ bitmap A
2. create a GDI+ graphics from A as B
3. get B's HDC
4. choose a proper pixel format for the HDC
5. create a GDI+ graphics from the HDC as C
6. create an OpenGL context from the HDC as D
7. draw with C and D
8. release the HDC back to B
9. draw A to other graphics
C is needed because I found that after releasing the HDC back to B, OpenGL can no longer change the bitmap.
To check that things work, I added a bitmap E by reading back the GL pixels before step 9.
There are still some problems in the current program:
After releasing the HDC, bitmap A loses its alpha information.
Bitmap E works fine with correct alpha, but reading GL_BGRA does not give me the alpha, so I have to read GL_RGBA data and do a per-pixel conversion into the GDI+ color format.
Should I just use E, or is there a better approach?
Example of ReleaseHDC losing the alpha:
Bitmap bitmap(100,100);
Graphics graphics(&bitmap);
HDC hdc=graphics.GetHDC();
Graphics graphics2(hdc);
graphics2.Clear(Color(128,255,0,0));
graphics2.Flush();
graphics.ReleaseHDC(hdc);
//draw bitmap to other graphics
Note: I just figured out that GDI+ doesn't actually use the alpha channel of the HDC; only the RGB channels are shared, so I wonder how this works with a BGRA target bitmap.
I think the best way to use OpenGL and GDI+ together is to generate a texture in GDI+, then load that into OpenGL.
The gist of this is:
void MakeTexture(GLuint& texId)
{
Bitmap offscreen(512, 512, PixelFormat32bppARGB);
Graphics gr(&offscreen);
gr.Clear(Color(128, 255, 0, 0));
Gdiplus::SolidBrush brush(Color(255, 0, 0, 255));
Pen pen(Color(128, 255, 0, 0), 16.f);
Font font(L"Arial", 48.f);
Rect r(25, 25, 100, 100);
gr.DrawRectangle(&pen, r);
gr.DrawString(TEXT("TEST STRING"), -1, &font, PointF(50, 50), &brush);
vector<DWORD> argb;
GetBitsLockBits(offscreen, argb, 1);
genTexture(texId, offscreen.GetWidth(), offscreen.GetHeight(), argb);
}
void GetBitsLockBits(Bitmap& bmp, vector<DWORD>& argb, bool invert = 0)
{
BitmapData bmpData;
RectF rectf;
Unit unit;
bmp.GetBounds(&rectf, &unit);
Rect rect(rectf.X, rectf.Y, rectf.Width, rectf.Height);
printf("Got rect %d %d %d %d\n", rect.X, rect.Y, rect.Width, rect.Height);
bmp.LockBits(&rect, ImageLockModeRead, PixelFormat32bppARGB, &bmpData);
printf("BMP has w=%d h=%d stride=%d\n", bmpData.Width, bmpData.Height, bmpData.Stride);
argb.resize(bmpData.Width * bmpData.Height);
if (invert)
    // Flip vertically while copying (OpenGL's origin is bottom-left); step by Stride to respect row padding.
    for (UINT i = 0; i < bmpData.Height; i++)
        memcpy(&argb[i * bmpData.Width], (BYTE*)bmpData.Scan0 + (bmpData.Height - 1 - i) * bmpData.Stride, bmpData.Width * 4);
else if (bmpData.Stride == (INT)(bmpData.Width * 4))
    memcpy(&argb[0], bmpData.Scan0, bmpData.Width * bmpData.Height * 4); // rows are tightly packed, copy in one go
else
    // Rows are padded, so copy row by row using the stride as the source pitch.
    for (UINT i = 0; i < bmpData.Height; i++)
        memcpy(&argb[i * bmpData.Width], (BYTE*)bmpData.Scan0 + i * bmpData.Stride, bmpData.Width * 4);
bmp.UnlockBits(&bmpData);
}
void genTexture(GLuint& texId, int w, int h, const vector<DWORD>& argb)
{
glGenTextures(1, &texId); CHECK_GL;
glBindTexture(GL_TEXTURE_2D, texId); CHECK_GL;
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); CHECK_GL;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); CHECK_GL;
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); CHECK_GL;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, &argb[0]); CHECK_GL;
}
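For completeness, here is a minimal usage sketch, assuming a compatibility (fixed-function) context and that texId was produced by MakeTexture() above; blending is enabled so the texture's alpha has an effect:
// Draw the GDI+-rendered texture on a screen-aligned quad.
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, texId);
glBegin(GL_QUADS);
glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
glEnd();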
Trying to have OpenGL and GDI+ cooperate by drawing into the same window directly will probably get you very flickery results.

Create a 1x1 texture in SDL 2.0

In C# and XNA, you can create a 1x1 texture like this:
Texture2D white_pixel;
white_pixel = new Texture2D(GraphicsDevice, 1, 1);
white_pixel.SetData(new[] { Color.White });
Then later on, you can arbitrarily draw the pixel to any size and color by doing this:
spriteBatch.Begin();
spriteBatch.Draw(white_pixel, new Rectangle(0, 0, width, height), Color.Whatever);
spriteBatch.End();
What is the equivalent in SDL?
SDL_Texture *tex = nullptr;
tex = SDL_CreateTexture(renderer,
                        Uint32 format, // What do I put here
                        int access,    // and here
                        1,
                        1);
// Not sure if this is correct
SDL_SetTextureColorMod(tex,
                       255,
                       255,
                       255);
SDL_Rect rect;
rect.x = 0;
rect.y = 0;
rect.w = 10;
rect.h = 10;
SDL_RenderCopy(renderer, tex, nullptr, &rect);
SDL_PIXELFORMAT_RGB24/SDL_PIXELFORMAT_BGR24 for format and SDL_TEXTUREACCESS_STATIC for access would be a good start.
Or you could just draw a colored rectangle directly via SDL_SetRenderDrawColor() and SDL_RenderFillRect().
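Putting that together, a minimal sketch of a 1x1 white texture that is tinted and stretched at draw time (the renderer comes from the question; the pixel buffer and variable names are just placeholders):
// Create a 1x1 white texture that can be recolored and scaled when drawn.
Uint8 white_pixel[3] = { 255, 255, 255 };
SDL_Texture *tex = SDL_CreateTexture(renderer,
                                     SDL_PIXELFORMAT_RGB24,
                                     SDL_TEXTUREACCESS_STATIC,
                                     1, 1);
SDL_UpdateTexture(tex, nullptr, white_pixel, 3); // pitch = 3 bytes per row for RGB24

// Tint the texture (the SDL counterpart of Color.Whatever) and draw it stretched to 10x10.
SDL_SetTextureColorMod(tex, 255, 0, 0);
SDL_Rect rect = { 0, 0, 10, 10 };
SDL_RenderCopy(renderer, tex, nullptr, &rect);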

Slight undesired transparency from FillRectangle

I have a window created with the WS_EX_LAYERED window style. I am currently drawing onto a memory bitmap using GDI+, and using UpdateLayeredWindow to update the graphical content of my layered window.
Here's a snippet of my code:
void Redraw(HWND hWnd, int width, int height) {
static bool floppy = true;
floppy = !floppy;
HDC hScreenDC = GetDC(HWND_DESKTOP);
HDC hMemDC = CreateCompatibleDC(hScreenDC);
HBITMAP hBmp = CreateCompatibleBitmap(hScreenDC, width, height);
HGDIOBJ hObj = SelectObject(hMemDC, hBmp);
Graphics gfx(hMemDC);
SolidBrush b(Color(254, (floppy ? 255 : 0), (floppy ? 0 : 255), 0));
gfx.FillRectangle(&b, Rect(0, 0, width, height));
BLENDFUNCTION blend;
blend.BlendOp = AC_SRC_OVER;
blend.BlendFlags = 0;
blend.SourceConstantAlpha = 255;
blend.AlphaFormat = AC_SRC_ALPHA;
POINT src = { 0, 0 };
SIZE size;
size.cx = width;
size.cy = height;
Assert(UpdateLayeredWindow(
hWnd,
hScreenDC,
NULL,
&size,
hMemDC,
&src,
RGB(0, 0, 0),
&blend,
ULW_ALPHA
));
SelectObject(hMemDC, hObj);
DeleteObject(hBmp);
DeleteDC(hMemDC);
ReleaseDC(HWND_DESKTOP, hScreenDC);
}
When creating my SolidBrush, I specified the value of 254 for the alpha component. This results in a 99.6% opaque fill, which is not what I want.
When I specify 255 as the alpha component, there appears to be no fill; my window becomes completely transparent. This is an issue because I wish to draw shapes that are 100% opaque, but I also wish to draw some that aren't.
There seem to be some quirks with FillRectangle. This becomes apparent when we observe that using FillEllipse with a SolidBrush whose alpha component is 255 results in the shape being rendered perfectly (opaque).
Here are two work-arounds that I came up with, which each solve the issue for me:
Call FillRectangle twice
SolidBrush b(Color(254, 255, 0, 0));
gfx.FillRectangle(&b, Rect(0, 0, width, height));
gfx.FillRectangle(&b, Rect(0, 0, width, height));
Since the same area is filled twice, the two fills blend and produce RGB(255, 0, 0) regardless of the content behind the window (it is now 100% opaque). I would rather not use this method, as it requires every rectangle to be drawn twice.
Use FillPolygon instead
Just as with FillEllipse, FillPolygon doesn't seem to have the colour issue, unless you call it like so:
SolidBrush b(Color(255, 255, 0, 0));
Point points[4];
points[0] = Point(0, 0);
points[1] = Point(width, 0);
points[2] = Point(width, height);
points[4] = Point(0, height);
gfx.FillPolygon(&b, points, 4); //don't copy and paste - this won't work
The above code will result in a 100% transparent window. I am guessing that this is either due to some form of optimisation that routes the call to FillRectangle instead, or (more likely) some problem with FillPolygon, which is called by FillRectangle. However, if you add an extra Point to the array, you can get around it:
SolidBrush b(Color(255, 255, 0, 0));
Point points[5];
points[0] = Point(0, 0);
points[1] = Point(0, 0); //<-
points[2] = Point(width, 0);
points[3] = Point(width, height);
points[4] = Point(0, height);
gfx.FillPolygon(&b, points, 5);
The above code will indeed draw a 100% opaque shape, which fixes my problem.
UpdateLayeredWindow() requires a bitmap with pre-multiplied alpha:
Note that the APIs use premultiplied alpha, which means that the red,
green and blue channel values in the bitmap must be premultiplied with
the alpha channel value. For example, if the alpha channel value is x,
the red, green and blue channels must be multiplied by x and divided
by 0xff prior to the call.
You can use Bitmap::ConvertFormat() to convert a bitmap to pre-multiplied (the format is PixelFormat32bppPARGB).
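As a sketch of what that premultiplication means in practice: if you render into a Gdiplus::Bitmap with PixelFormat32bppARGB (rather than a plain compatible bitmap), you can scale each color channel by its alpha before handing the bits to UpdateLayeredWindow(). The helper below is hypothetical and just illustrates the math via LockBits; Bitmap::ConvertFormat() to PixelFormat32bppPARGB achieves the same result.
// Hypothetical helper: convert a 32bpp ARGB GDI+ bitmap to premultiplied alpha in place.
void PremultiplyAlpha(Gdiplus::Bitmap& bmp)
{
    Gdiplus::BitmapData data;
    Gdiplus::Rect rect(0, 0, bmp.GetWidth(), bmp.GetHeight());
    bmp.LockBits(&rect, Gdiplus::ImageLockModeRead | Gdiplus::ImageLockModeWrite,
                 PixelFormat32bppARGB, &data);
    for (UINT y = 0; y < data.Height; ++y)
    {
        BYTE* row = (BYTE*)data.Scan0 + y * data.Stride;
        for (UINT x = 0; x < data.Width; ++x)
        {
            BYTE* px = row + x * 4;          // memory layout: B, G, R, A
            BYTE a = px[3];
            px[0] = (BYTE)(px[0] * a / 255); // blue
            px[1] = (BYTE)(px[1] * a / 255); // green
            px[2] = (BYTE)(px[2] * a / 255); // red
        }
    }
    bmp.UnlockBits(&data);
}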