How do you update a UTextureRenderTarget2D dynamically in C++?

I've been trying, but failing, to do it with this code:
UTextureRenderTarget2D *renderTarget; // In header

void UMyActorComponent::BeginPlay()
{
    Super::BeginPlay();

    texture = UTexture2D::CreateTransient(1, 1);
    FTexture2DMipMap& Mip = texture->PlatformData->Mips[0];
    void* Data = Mip.BulkData.Lock(LOCK_READ_WRITE);
    uint8 red[4] = { 255, 255, 255, 255 };
    FMemory::Memcpy(Data, &red, sizeof(red));
    Mip.BulkData.Unlock();
    texture->UpdateResource();

    renderTarget->UpdateTexture2D(texture, TSF_BGRA8);
}
I at least know for a fact that I have the correct handle to the render target, because I've been able to clear it. I might be missing something simple; any help is appreciated.

I found a solution, and it seems like it's the correct way to do it.
Disclaimer
Using a texture render target might not be the best approach, since a render target generally conveys that you're rendering from the GPU to the GPU, and in this case I'm writing data from the CPU to the GPU. So you might run into some unforeseen problems down the line if you take this approach. Creating a transient texture is probably the proper way to do this.
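For reference, here is a minimal sketch of that transient-texture route, reusing the calls already shown in the question; the 2x2 size, the PF_B8G8R8A8 format, and the Pixels placeholder are illustrative rather than taken from the original code.
UTexture2D* MyTexture = UTexture2D::CreateTransient(2, 2, PF_B8G8R8A8);
uint8 Pixels[2 * 2 * 4] = { 0 }; // fill with BGRA bytes for each pixel
FTexture2DMipMap& Mip = MyTexture->PlatformData->Mips[0];
void* Data = Mip.BulkData.Lock(LOCK_READ_WRITE);
FMemory::Memcpy(Data, Pixels, sizeof(Pixels));
Mip.BulkData.Unlock();
MyTexture->UpdateResource(); // pushes the CPU-side mip data to the GPU resource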
Solution
Dynamically initializing the Render Target's Resource
So first you'll need to initialize your UTextureRenderTarget2D to your desired resolution using this function. Put this where it makes sense.
RenderTarget->InitCustomFormat(2, 2, PF_B8G8R8A8, true);
I'm using a 2x2 image with a BGRA8 pixel format and linear color space for this example.
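For instance, a sketch assuming the component keeps the RenderTarget pointer from the question and initializes it in BeginPlay:
void UMyActorComponent::BeginPlay()
{
    Super::BeginPlay();
    // 2x2, BGRA8, linear gamma; use whatever resolution you actually need
    RenderTarget->InitCustomFormat(2, 2, PF_B8G8R8A8, true);
}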
Function to update the Render Target
Then you'll want to place this function somewhere accessible in your code:
void UpdateTextureRegion(FTexture2DRHIRef TextureRHI, int32 MipIndex, uint32 NumRegions, FUpdateTextureRegion2D Region, uint32 SrcPitch, uint32 SrcBpp, uint8* SrcData, TFunction<void(uint8* SrcData)> DataCleanupFunc = [](uint8*) {})
{
    ENQUEUE_RENDER_COMMAND(UpdateTextureRegionsData)(
        [=](FRHICommandListImmediate& RHICmdList)
        {
            check(TextureRHI.IsValid());
            RHIUpdateTexture2D(
                TextureRHI,
                MipIndex,
                Region,
                SrcPitch,
                SrcData
                    + Region.SrcY * SrcPitch
                    + Region.SrcX * SrcBpp
            );
            DataCleanupFunc(SrcData);
        });
}
Updating the Render Target
This code will write a red, a green, a blue, and a white pixel to the render target (note the BGRA byte order):
static uint8 rgbw[4 * 4] = {
    0, 0, 255, 255,     // red   (B, G, R, A)
    0, 255, 0, 255,     // green
    255, 0, 0, 255,     // blue
    255, 255, 255, 255  // white
};
auto region = FUpdateTextureRegion2D(0, 0, 0, 0, 2, 2);
// The function expects the RHI texture, so fetch it from the render target's resource.
UpdateTextureRegion(RenderTarget->GameThread_GetRenderTargetResource()->GetRenderTargetTexture(), 0, 1, region, 2 * 4, 4, rgbw);
Since the enqueued lambda runs later on the render thread, the pixel data has to stay alive until then, which is why the example uses a static array (alternatively, free the buffer in DataCleanupFunc).
Attach the render target to a plane in the editor and, when all is said and done, you should hopefully see the red, green, blue, and white pixels.
And with that you should have some functioning code to play with.

Related

How to use glImportMemoryWin32HandleEXT to share an ID3D11Texture2D KeyedMutex Shared handle with OpenGL?

I am investigating how to do cross-process interop with OpenGL and Direct3D 11 using the EXT_external_objects, EXT_external_objects_win32 and EXT_win32_keyed_mutex OpenGL extensions. My goal is to share a B8G8R8A8_UNORM texture (an external library expects BGRA and I cannot change it; what's relevant here is the byte depth of 4) with 1 mip level, allocated and written to offscreen with D3D11 by one application, and render it with OpenGL in another. Because the texture is being drawn to by another process, I cannot use WGL_NV_DX_interop2.
My actual code can be seen here and is written in C# with Silk.NET. For illustration's purposes, though, I will describe my problem in pseudo-C(++).
First, I create my texture in Process A with D3D11, obtain a shared handle to it, and send it over to Process B.
#define WIDTH 100
#define HEIGHT 100
#define BPP 4 // BGRA8 is 4 bytes per pixel

ID3D11Texture2D *texture;
D3D11_TEXTURE2D_DESC texDesc = {
    .Width = WIDTH,
    .Height = HEIGHT,
    .MipLevels = 1,
    .ArraySize = 1,
    .Format = DXGI_FORMAT_B8G8R8A8_UNORM,
    .SampleDesc = { .Count = 1, .Quality = 0 },
    .Usage = D3D11_USAGE_DEFAULT,
    .BindFlags = D3D11_BIND_SHADER_RESOURCE,
    .CPUAccessFlags = 0,
    .MiscFlags = D3D11_RESOURCE_MISC_SHARED_NTHANDLE | D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX
};
device->CreateTexture2D(&texDesc, NULL, &texture);

HANDLE sharedHandle;
texture->CreateSharedHandle(NULL, DXGI_SHARED_RESOURCE_READ, NULL, &sharedHandle);
SendToProcessB(sharedHandle, pid);
In Process B, I first duplicate the handle to get one that's process-local.
HANDLE localSharedHandle;
HANDLE hProcA = OpenProcess(PROCESS_DUP_HANDLE, false, processAPID);
DuplicateHandle(hProcA, sharedHandle, GetCurrentProcess(), &localSharedHandle, 0, false, DUPLICATE_SAME_ACCESS);
CloseHandle(hProcA);
At this point, I have a valid shared handle to the DXGI resource in localSharedHandle. I have a D3D11 implementation of Process B that is able to successfully render the shared texture after opening it with OpenSharedResource1. My issue is with OpenGL, however.
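(For reference, that working D3D11-only path in Process B boils down to something like this sketch; device1 is my placeholder for an ID3D11Device1 created in Process B and is not part of the original code.)
ID3D11Texture2D *sharedTexD3D = NULL;
// Open the duplicated NT handle on the Process B device.
device1->OpenSharedResource1(localSharedHandle, IID_PPV_ARGS(&sharedTexD3D));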
This is what I am currently doing for OpenGL:
GLuint sharedTexture, memObj;
glCreateTextures(GL_TEXTURE_2D, 1, &sharedTexture);
glTextureParameteri(sharedTexture, GL_TEXTURE_TILING_EXT, GL_OPTIMAL_TILING_EXT); // D3D11 side is D3D11_TEXTURE_LAYOUT_UNDEFINED

// Create the memory object handle
glCreateMemoryObjectsEXT(1, &memObj);

// I am not actually sure what the size parameter here is referring to.
// Since the source texture is DX11, there's no way to get the allocation size,
// so I make a guess of W * H * BPP.
// According to the docs for VkExternalMemoryHandleTypeFlagBitsNV, NT-handle shared resources use HANDLE_TYPE_D3D11_IMAGE_EXT.
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
Checking for errors along the way seems to indicate the import was successful. However, I am not able to bind the texture.
if (glAcquireKeyedMutexWin32EXT(memObj, 0, (UINT)-1)) {
    DBG_GL_CHECK_ERROR(); // GL_NO_ERROR
    glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0);
    DBG_GL_CHECK_ERROR(); // GL_INVALID_VALUE
    glReleaseKeyedMutexWin32EXT(memObj, 0);
}
What goes wrong is the call to glTextureStorageMem2D. The shared KeyedMutex is being properly acquired and released. The extension documentation is unclear as to how I'm supposed to properly bind this texture and draw it.
After some more debugging, I managed to get [DebugSeverityHigh] DebugSourceApi: DebugTypeError, id: 1281: GL_INVALID_VALUE error generated. Memory object too small from the Debug context. By dividing my width in half I was able to get some garbled output on the screen.
It turns out the size needed to import the texture was not WIDTH * HEIGHT * BPP (where BPP = 4 for BGRA in this case), but WIDTH * HEIGHT * BPP * 2. Importing the handle with size WIDTH * HEIGHT * BPP * 2 allows the texture to properly bind and render correctly.
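In other words, only the size argument of the import call changes from the earlier snippet; a sketch, with the factor of 2 being the empirically working bound, since the driver's actual allocation size can't be queried from the D3D11 side:
glImportMemoryWin32HandleEXT(memObj, WIDTH * HEIGHT * BPP * 2, GL_HANDLE_TYPE_D3D11_IMAGE_EXT, (void*)localSharedHandle);
glTextureStorageMem2D(sharedTexture, 1, GL_RGBA8, WIDTH, HEIGHT, memObj, 0); // now succeeds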

C++ DirectX DrawText in multiple colors

I use the ID3DXFont interface to draw text, and it perfectly suits my needs as long as the complete string is in a single color. Now I'd like to draw a string in multiple colors. For instance "abc", with a in red, b in yellow, etc.
I know that I could draw each letter on its own, giving a different Color parameter to DrawText each time. The only issue with this is that I do not know by how many pixels I should offset after each letter, because every letter has a different width. Hardcoding widths is not really a good solution.
The ID3DXFont interface doesn't allow you to draw multiple colors within a single invocation of DrawText. However, it can give you the bounding rectangle of any text that you wish to draw using the DT_CALCRECT flag, so you do not need to hardcode the widths of particular glyphs within your font. This also means you can switch the font and/or the font size without needing to modify your drawing code or hardcode new widths. For example:
ID3DXFont* font = ...;
const char* strings[] = { "A", "i", "C" };
D3DCOLOR colors[] = { D3DCOLOR_ARGB(255, 255, 0, 0), D3DCOLOR_ARGB(255, 0, 255, 0), D3DCOLOR_ARGB(255, 0, 0, 255) };
RECT r = { 10, 10, 0, 0 }; // starting point
for (int i = 0; i < _countof(strings); ++i)
{
    font->DrawText(NULL, strings[i], -1, &r, DT_CALCRECT, 0);
    font->DrawText(NULL, strings[i], -1, &r, DT_NOCLIP, colors[i]);
    r.left = r.right; // offset for next character.
}
Note: I have used 'i' instead of 'b' from your example, because it makes it apparent that the rectangles are correct, as 'i' is (generally) a very thin glyph. Also note that this assumes a single line of text. The calculated rectangle also includes the height, so if you are drawing multiple lines, you can use the height of the calculated rectangle to offset the position, as sketched below.
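A minimal sketch of that line break, reusing r from the loop above (the 10 matches the starting left margin in the example and is otherwise arbitrary):
r.left = 10;       // back to the left margin
r.top  = r.bottom; // DT_CALCRECT set bottom to top + text height, so this moves down one line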

OpenGL: wglGetProcAddress always returns NULL

I am trying to create an OpenGL program which uses shaders. I tried using the following code to load the shader functions, but wglGetProcAddress always returns 0 no matter what I do.
The rest of the program works as normal when not using the shader functions.
HDC g_hdc;
HGLRC g_hrc;

PFNGLATTACHSHADERPROC glpf_attachshader;
PFNGLCOMPILESHADERPROC glpf_compileshader;
PFNGLCREATEPROGRAMPROC glpf_createprogram;
PFNGLCREATESHADERPROC glpf_createshader;
PFNGLDELETEPROGRAMPROC glpf_deleteprogram;
PFNGLDELETESHADERPROC glpf_deleteshader;
PFNGLDETACHSHADERPROC glpf_detachshader;
PFNGLLINKPROGRAMPROC glpf_linkprogram;
PFNGLSHADERSOURCEPROC glpf_shadersource;
PFNGLUSEPROGRAMPROC glpf_useprogram;

void GL_Init(HDC dc)
{
    //create pixel format
    PIXELFORMATDESCRIPTOR pfd =
    {
        sizeof(PIXELFORMATDESCRIPTOR),
        1,
        PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA,
        32,
        0, 0, 0, 0, 0, 0,
        0, 0, 0,
        0, 0, 0, 0,
        32, 0, 0,
        PFD_MAIN_PLANE,
        0, 0, 0, 0
    };

    //choose + set pixel format
    int pixfmt = ChoosePixelFormat(dc, &pfd);
    if (pixfmt && SetPixelFormat(dc, pixfmt, &pfd))
    {
        //create GL render context
        if (g_hrc = wglCreateContext(dc))
        {
            g_hdc = dc;

            //select GL render context
            wglMakeCurrent(dc, g_hrc);

            //get function pointers
            glpf_attachshader = (PFNGLATTACHSHADERPROC) wglGetProcAddress("glAttachShader");
            glpf_compileshader = (PFNGLCOMPILESHADERPROC) wglGetProcAddress("glCompileShader");
            glpf_createprogram = (PFNGLCREATEPROGRAMPROC) wglGetProcAddress("glCreateProgram");
            glpf_createshader = (PFNGLCREATESHADERPROC) wglGetProcAddress("glCreateShader");
            glpf_deleteprogram = (PFNGLDELETEPROGRAMPROC) wglGetProcAddress("glDeleteProgram");
            glpf_deleteshader = (PFNGLDELETESHADERPROC) wglGetProcAddress("glDeleteShader");
            glpf_detachshader = (PFNGLDETACHSHADERPROC) wglGetProcAddress("glDetachShader");
            glpf_linkprogram = (PFNGLLINKPROGRAMPROC) wglGetProcAddress("glLinkProgram");
            glpf_shadersource = (PFNGLSHADERSOURCEPROC) wglGetProcAddress("glShaderSource");
            glpf_useprogram = (PFNGLUSEPROGRAMPROC) wglGetProcAddress("glUseProgram");
        }
    }
}
I know this may be a possible duplicate, but in most of the other posts the error was caused by simple mistakes (like calling wglGetProcAddress before wglMakeCurrent). I'm in a bit of a unique situation; any help would be appreciated.
You are requesting a 32-bit Z-buffer in this code. That will probably throw you onto the GDI software rasterizer (which implements exactly 0 extensions). You can use 32-bit depth buffers on modern hardware, but most of the time you cannot do it using the default framebuffer; you have to use FBOs.
I have seen some drivers accept a 32-bit depth request only to fall back to the closest matching 24-bit pixel format, while others simply refuse to give a hardware pixel format at all (this is probably your case). If this is in fact your problem, a quick look at your GL strings (GL_RENDERER, GL_VERSION, GL_VENDOR) should make it obvious.
24-bit depth + 8-bit stencil is pretty much universally supported; it is the first depth/stencil size you should try, as in the sketch below.
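As a sketch, that means changing only the depth/stencil fields in the questioner's descriptor (everything else stays exactly as in the question):
PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),
    1,
    PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    32,               // 32-bit color
    0, 0, 0, 0, 0, 0,
    0, 0, 0,
    0, 0, 0, 0,
    24, 8, 0,         // 24-bit depth, 8-bit stencil, no aux buffers
    PFD_MAIN_PLANE,
    0, 0, 0, 0
};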
You could use GLEW to do that kind of work for you.
Or maybe you don't want to?
EDIT:
You could maybe use glext.h then, or at least look in it and copy-paste what interests you.
I'm not an expert; I just remember having had that kind of problem and am trying to recall what was connected to it.

Allegro 5 - creating a custom bitmap with alpha channel

Afternoon everyone,
I was wondering if there's any way I could create a custom bitmap with an alpha channel:
bitmap = al_create_bitmap(30, 30);
al_set_target_bitmap(bitmap);
al_clear_to_color(al_map_rgb(255,255,255));
....
al_draw_tinted_bitmap(bitmap, al_map_rgba(0, 0, 0, 0.5), X, Y, 0);
I'm sure that I'm either not creating or drawing the bitmap correctly, so I could really use some advice.
Thanks in advance,
Alex
The only thing wrong with your code snippet is:
al_map_rgba(0, 0, 0, 0.5)
should be:
al_map_rgba_f(0, 0, 0, 0.5)
The former takes integer components in the range 0 to 255; the _f variant takes floats from 0.0 to 1.0.
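For example, these two calls describe the same half-transparent black (128 being roughly 0.5 * 255; the values are just an illustration, not from the original post):
al_map_rgba(0, 0, 0, 128);         // integer components, 0-255
al_map_rgba_f(0.0, 0.0, 0.0, 0.5); // float components, 0.0-1.0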
Also, keep in mind that Allegro's default blender is pre-multiplied alpha. So if you wanted to tint red at 50%, you'd use:
float a = 0.5;
... al_map_rgba_f(1.0 * a, 0.0 * a, 0.0 * a, a) ...
If you're not thinking about it, you're probably assuming it interpolates; i.e., the more intuitive blender for most people seems to be:
al_set_blender(ALLEGRO_ADD, ALLEGRO_ALPHA, ALLEGRO_INVERSE_ALPHA)
but that is not the default for the reasons mentioned in the above link.
After I set
al_set_blender(ALLEGRO_ADD, ALLEGRO_ALPHA, ALLEGRO_INVERSE_ALPHA);
it allowed me to draw my "bouncer" bitmap and change its alpha channel using the function below:
al_draw_tinted_bitmap(bouncer, al_map_rgba_f(1, 1, 1, alpha), 40, 0, 0);
This previously did not work, so I guess adding the al_set_blender call solved the "mystery".
Thanks for all your help.

OpenGL texture colors are wrong

I've made a simple program that creates an orthographic projection and puts a texture containing a PNG on a quad.
However, I can't figure out why some of the colors are displayed all jumbled.
The png looks like this (the white rectangle in the middle is transparent):
The quad in my OpenGL program looks like this:
Below is the code for initializing OpenGL as well as what goes on in the method called by the OpenGL thread.
I'm using JOGL.
public void init(GLAutoDrawable gLDrawable) {
    gl.glGenTextures(1, textureId, 0);
    gl.glBindTexture(GL2.GL_TEXTURE_2D, textureId[0]);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S, GL2.GL_REPEAT);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T, GL2.GL_REPEAT);

    BufferedImage image = null;
    try {
        image = ImageIO.read(new File("d:\\temp\\projects\\openglTest1\\texTest.png"));
    } catch (IOException e1) {e1.printStackTrace();}

    DataBufferByte dataBufferByte = (DataBufferByte) image.getRaster().getDataBuffer();
    Buffer imageBuffer = ByteBuffer.wrap(dataBufferByte.getData());
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_RGBA, GL.GL_UNSIGNED_BYTE, imageBuffer);

    gl.glEnable(GL2.GL_TEXTURE_2D);
    gl.glBlendFunc(GL2.GL_ONE, GL2.GL_ONE_MINUS_SRC_ALPHA);
    gl.glEnable(GL2.GL_BLEND_SRC);
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClearDepth(1.0f);
    gl.glEnable(GL.GL_DEPTH_TEST);
    gl.glDepthFunc(GL.GL_LEQUAL);
    gl.glHint(GL2ES1.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_NICEST);
}
//this is called by the OpenGL Thread
public void display(GLAutoDrawable gLDrawable) {
    gl.glClear(GL.GL_COLOR_BUFFER_BIT);
    gl.glClear(GL.GL_DEPTH_BUFFER_BIT);
    gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GLPointerFunc.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL2.GL_CCW);
    gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, textureBuffer);
    gl.glDrawElements(GL.GL_TRIANGLES, indices.length, GL.GL_UNSIGNED_BYTE, indexBuffer);
    gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
}
This is puzzling to me because, while I'm not an OpenGL expert, I tried to understand what all the above OpenGL commands do before using them. In fact, I've done the same thing on Android and everything is displayed OK, but when doing it in Java with JOGL I get the result described here. The only thing I'm doing differently is the way I load the PNG image. On Android there's a helper method like this:
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmapStatic, 0);
while with JOGL I'm doing my own loading via:
try {
    image = ImageIO.read(new File("d:\\temp\\projects\\openglTest1\\texTest.png"));
} catch (IOException e1) {e1.printStackTrace();}
DataBufferByte dataBufferByte = (DataBufferByte) image.getRaster().getDataBuffer();
Buffer imageBuffer = ByteBuffer.wrap(dataBufferByte.getData());
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_RGBA, GL.GL_UNSIGNED_BYTE, imageBuffer);
as detailed above.
==UPDATE==
As per jcadam's comment, I've tried setting the format of the pixel data to GL_BGRA like so:
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_BGRA, GL.GL_UNSIGNED_BYTE, imageBuffer);
The colors are still jumbled, but it's a different jumble this time:
How can I find out what particular format my png image is in?
== UPDATE 2 - solution implementation ==
Ok, first, I want to thank jcadam, rotoglup and Tim for pointing me in the right direction.
In short, the issue was that the order in which Java arranges the pixels when decoding an image is not always the order OpenGL expects. More precisely, if the image has no alpha channel everything is fine, but if it does have an alpha channel the order is wrong and some colors will be jumbled.
Now, I started off by making my own manual implementation, which works OK for 32-bit PNGs and 24-bit JPEGs:
public void texImage2D(File imageLocation, GL gl) {
    BufferedImage initialImage = null;
    try {
        initialImage = ImageIO.read(imageLocation);
    } catch (IOException e1) {
        throw new RuntimeException(e1.getMessage(), e1);
    }
    int imgHeight = initialImage.getHeight(null);
    int imgWidth = initialImage.getWidth(null);
    ColorModel cm = initialImage.getColorModel();
    boolean hasAlpha = cm.hasAlpha();

    Buffer buffer = null;
    int openGlInternalFormat = -1;
    int openGlImageFormat = -1;

    if (!hasAlpha) {
        DataBufferByte dataBufferByte = (DataBufferByte) initialImage.getRaster().getDataBuffer();
        buffer = ByteBuffer.wrap(dataBufferByte.getData());
        openGlInternalFormat = GL2.GL_RGB;
        openGlImageFormat = GL2.GL_BGR;
    } else {
        openGlInternalFormat = GL2.GL_RGBA;
        openGlImageFormat = GL2.GL_RGBA;
        // Redraw the image into a raster with a known interleaved RGBA byte order.
        WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, imgWidth, imgHeight, 4, null);
        ComponentColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[] { 8, 8, 8, 8 },
                true, false,
                ComponentColorModel.TRANSLUCENT,
                DataBuffer.TYPE_BYTE);
        BufferedImage bufImg = new BufferedImage(colorModel, raster, false, null);
        Graphics2D g = bufImg.createGraphics();
        g.drawImage(initialImage, null, null);
        DataBufferByte imgBuf = (DataBufferByte) raster.getDataBuffer();
        byte[] bytes = imgBuf.getData();
        buffer = ByteBuffer.wrap(bytes);
        g.dispose();
    }

    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, openGlInternalFormat, imgWidth, imgHeight, 0, openGlImageFormat, GL.GL_UNSIGNED_BYTE, buffer);
}
However, I later found out that JOGL has its own helper tools for this, and this is in fact what I ended up using:
//this code should be called in init(), to load the texture:
InputStream stream = new FileInputStream("d:\\temp\\projects\\openglTest1\\texTest.png");
TextureData data = TextureIO.newTextureData(gl.getGLProfile(),stream, false, "png");
Texture myTexture = TextureIO.newTexture(data);
//this code should be called in the draw/display method, before the vertices drawing call
myTexture.enable(gl);
myTexture.bind(gl);
It looks like ABGR to me. If you just look at the colors:
png red (A1, B0, G0, R1) looks like opengl red (R1, G0, B0, A1)
png bluegreen (A1, B1, G1, R0) looks like opengl white (R1, G1, B1, A0)
png blue (A1, B1, G0, R0) looks like opengl yellow (R1, G1, B0, A0)
png clear (A0, B?, G?, R?) could be opengl bluegreen (R0, B?, G?, A?)
If opengl transparency is disabled then the alpha channel wouldn't matter.
Hmm... It looks like a pixel format problem. You could get more specific and try GL_RGBA8, GL_RGBA16, etc. Is this an 8-bit PNG rather than 24- or 32-bit? Is there no alpha channel (in which case use GL_RGB rather than GL_RGBA)?
Just from a quick search (I don't have any actual experience with Java ImageIO), it seems that Java has a native ARGB byte ordering; you may take a look at this source code for inspiration.