I was trying to use a texture view as a framebuffer attachment.
It works when the texture view uses the base mipmap level, but when the texture view specifies a mip level above the base level,
glCheckFramebufferStatus() returns error code 36057, which is FRAMEBUFFER_INCOMPLETE_DIMENSIONS.
I checked that there is no size mismatch between the texture view and the original texture.
Here is the code that builds the prefiltered cubemap for PBR rendering:
unsigned int maxMipLevels = 5;
for (int mip = 0; mip < maxMipLevels; ++mip)
{
    unsigned int mipWidth  = 128 * std::pow(0.5, mip);
    unsigned int mipHeight = 128 * std::pow(0.5, mip);
    RenderCommand::SetViewport(0, 0, mipWidth, mipHeight);
    m_DepthTexture->SetSize(mipWidth, mipHeight);
    CubeMapPass->DetachAll();
    CubeMapPass->AttachTexture(m_DepthTexture, AttachmentType::Depth_Stencil, TextureType::Texture2D, 0);

    float roughness = (float)mip / (float)(maxMipLevels - 1);
    roughBuffer.roughness = roughness;
    roughnessConstantBuffer->SetData(&roughBuffer, sizeof(roughBuffer), 0);
    for (int i = 0; i < 6; ++i)
    {
        ....
        ...
        CameraBuffer.view = captureViews[i];
        cameraConstantBuffer->SetData(&CameraBuffer, sizeof(CameraData), 0);
        CubeMapPass->AttachTexture(m_PrefilterMap, AttachmentType::Color_0, TextureType::Texture2D, mip, i, 1);
        CubeMapPass->Clear();
        ....
        ...
    }
}
The depth-stencil texture is resized to match the cubemap face size at each mip level (the cubemap starts at 128x128).
In the AttachTexture function:
// m_RendererID is the framebuffer id
Ref<OpenGLRenderTargetView> targetview = CreateRef<OpenGLRenderTargetView>(type, texture, texture->GetDesc().Format, mipLevel, layerStart, layerLevel);
targetview->Bind(m_RendererID, attachType);
In the texture view's Bind function:
GLenum attachmentIndex = Utils::GetAttachmentType(type);
GLenum target = Utils::GetTextureTarget(viewDesc.Type, multisampled);
glNamedFramebufferTexture(framebufferid, attachmentIndex, m_renderID, m_startMip);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
As I said before, there is no problem when the texture view accesses the base mip level.
But after the first loop iteration, the texture view accesses the next mip level (its size will be 64x64, and the depth-stencil texture is also 64x64).
The framebuffer has two attachments: one color attachment (the texture view) and one depth-stencil texture.
Still, glCheckFramebufferStatus keeps reporting INCOMPLETE_DIMENSIONS after the first iteration.
I read about texture views here: https://www.khronos.org/opengl/wiki/Texture_Storage
According to that article,
glTextureView(..., GLuint minlevel, GLuint numlevels, GLuint minlayer, GLuint numlayers)
creates a texture view of an original texture, and minlevel becomes the base mip level of the view.
As you can see in my AttachTexture function, it takes a mip level, and the next two parameters are the array start and array count. The texture view is created with:
glTextureView(m_renderID, target, srcTexID, internalformat, startMip, 1, startLayer, numlayer);
so each view contains exactly one mip level, not two or three.
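To make the wiki's numbering concrete, this is how I understand single-level view creation and attachment (an untested sketch; mip, face, fbo, srcCubemap, and the GL_RGBA16F format are placeholders, not my real code):

// Untested sketch with placeholder names.
GLuint view;
glGenTextures(1, &view); // the name must come from glGenTextures and stay unbound
// A view of exactly one mip level and one face of the (immutable-storage) source cubemap:
glTextureView(view, GL_TEXTURE_2D, srcCubemap, GL_RGBA16F,
              mip, 1,    // minlevel, numlevels
              face, 1);  // minlayer, numlayers
// Inside the view, mip numbering restarts at zero,
// so the view's only level is level 0:
glNamedFramebufferTexture(fbo, GL_COLOR_ATTACHMENT0, view, 0);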
I don't know why it doesn't work.
It does work when I'm not using a texture view, as in the code below:
if (texture->GetDesc().Type == TextureType::TextureCube && type == TextureType::Texture2D)
{
    target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + layerStart;
}
if (type == TextureType::TextureCube)
    glFramebufferTexture(GL_FRAMEBUFFER, attachmentIndex, texId, mipLevel);
else
    glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndex, target, texId, mipLevel);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
Is it valid for a KTX image to be a cubemap array, or is that not a thing?
I have some code that I'm currently using for uploading the data from a KTX file to the GPU. Currently, the code works for a regular 2D image, a cubemap, and a texture array. However, it would not support a KTX image that is a cubemap array, if that is a thing.
If it is possible, what is the code below missing to accomplish that?
uint32_t offset = 0;
for (uint32_t layer = 0; layer < layers; layer++) {
    for (uint32_t face = 0; face < faces; face++) {
        for (uint32_t level = 0; level < mipLevels; level++) {
            offset = tex->GetImageOffset(layer, face, level);
            vk::BufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = vk::ImageAspectFlagBits::eColor;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            bufferCopyRegion.imageSubresource.baseArrayLayer = (faces == 6 ? face : layer); // TexArray or Cubemap, not both.
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = width >> level;
            bufferCopyRegion.imageExtent.height = height >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}
The Vulkan command to transfer the image:
// std::vector<vk::BufferImageCopy> regions;
cmdBuf->copyBufferToImage(srcBufferHandle, destImageHandle,
vk::ImageLayout::eTransferDstOptimal, uint32_t(regions.size()), regions.data());
Yes, KTX also supports cube map arrays (see the KTX specification). Those are stored using layers.
The Vulkan spec states the following on how cube maps are stored in a cube map array:
For cube arrays, each set of six sequential layers is a single cube, so the number of cube maps in a cube map array view is layerCount / 6, and image array layer (baseArrayLayer + i) is face index (i mod 6) of cube i / 6.
So you need to change the baseArrayLayer of your buffer copy region accordingly.
Sample code:
// Set up buffer copy regions to get the data from the KTX file into your own image
for (uint32_t layer = 0; layer < ktxTexture->numLayers; layer++) {
    for (uint32_t face = 0; face < 6; face++) {
        for (uint32_t level = 0; level < ktxTexture->numLevels; level++) {
            ktx_size_t offset;
            KTX_error_code ret = ktxTexture_GetImageOffset(ktxTexture, level, layer, face, &offset);
            VkBufferImageCopy bufferCopyRegion = {};
            bufferCopyRegion.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
            bufferCopyRegion.imageSubresource.mipLevel = level;
            // Each cube is six sequential layers: cube `layer`, face `face`.
            bufferCopyRegion.imageSubresource.baseArrayLayer = layer * 6 + face;
            bufferCopyRegion.imageSubresource.layerCount = 1;
            bufferCopyRegion.imageExtent.width = ktxTexture->baseWidth >> level;
            bufferCopyRegion.imageExtent.height = ktxTexture->baseHeight >> level;
            bufferCopyRegion.imageExtent.depth = 1;
            bufferCopyRegion.bufferOffset = offset;
            bufferCopyRegions.push_back(bufferCopyRegion);
        }
    }
}
// Create the image view for a cube map array
VkImageViewCreateInfo view = vks::initializers::imageViewCreateInfo();
view.viewType = VK_IMAGE_VIEW_TYPE_CUBE_ARRAY;
view.format = format;
view.components = { VK_COMPONENT_SWIZZLE_R, VK_COMPONENT_SWIZZLE_G, VK_COMPONENT_SWIZZLE_B, VK_COMPONENT_SWIZZLE_A };
view.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
view.subresourceRange.layerCount = 6 * cubeMap.layerCount;
view.subresourceRange.levelCount = cubeMap.mipLevels;
view.image = cubeMap.image;
vkCreateImageView(device, &view, nullptr, &cubeMap.view);
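One prerequisite the snippet above does not show: the VkImage itself must be created cube-compatible, and sampling a cube map array in shaders additionally requires the imageCubeArray device feature. A sketch of the relevant creation fields (numLayers and numMipLevels are placeholders):

VkImageCreateInfo imageInfo = vks::initializers::imageCreateInfo();
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.flags = VK_IMAGE_CREATE_CUBE_COMPATIBLE_BIT; // required for cube / cube array views
imageInfo.arrayLayers = 6 * numLayers;                 // six faces per cube
imageInfo.mipLevels = numMipLevels;
// ... format, extent, samples, tiling and usage as usual ...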
I am currently trying to load an FBX mesh for use with DirectX, but my FBX file has its UVs stored by polygon-vertex index and its normals stored by control-point index. How do I know which vertices have which control point's values?
My code for loading positions, UVs, and normals is taken straight from the FBX example code, but I can post it if needed.
Edit: as requested, here are the relevant parts of my code.
The UV code goes into the branch for mapping by polygon vertex, while the normal element is set to mapping by control point.
// Load UVs
if (lUVElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            // Get the index of the current vertex in the control points array
            int lPolyVertIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
            // The UV index depends on the reference mode
            int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyVertIndex) : lPolyVertIndex;
            lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
            _floatVec->push_back((float)lUVValue.mData[0]);
            _floatVec->push_back((float)lUVValue.mData[1]);
        }
    }
}
else if (lUVElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    int lPolyIndexCounter = 0;
    for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
    {
        const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
        for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
        {
            if (lPolyIndexCounter < lIndexCount)
            {
                // The UV index depends on the reference mode
                int lUVIndex = lUseIndex ? lUVElement->GetIndexArray().GetAt(lPolyIndexCounter) : lPolyIndexCounter;
                lUVValue = lUVElement->GetDirectArray().GetAt(lUVIndex);
                _floatVec->push_back((float)lUVValue.mData[0]);
                _floatVec->push_back((float)lUVValue.mData[1]);
                lPolyIndexCounter++;
            }
        }
    }
}
// And now the normals
if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByControlPoint)
{
    // Get the normal of each vertex, since the mapping mode of the normal element is by control point
    for (int lVertexIndex = 0; lVertexIndex < mesh->GetControlPointsCount(); lVertexIndex++)
    {
        int lNormalIndex = 0;
        // If the reference mode is direct, the normal index is the same as the vertex index
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eDirect)
            lNormalIndex = lVertexIndex;
        // If the reference mode is index-to-direct, get the normal via the index array
        if (lNormalElement->GetReferenceMode() == FbxGeometryElement::eIndexToDirect)
            lNormalIndex = lNormalElement->GetIndexArray().GetAt(lVertexIndex);
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        _floatVec->push_back((float)lNormal[0]);
        _floatVec->push_back((float)lNormal[1]);
        _floatVec->push_back((float)lNormal[2]);
    }
}
else if (lNormalElement->GetMappingMode() == FbxGeometryElement::eByPolygonVertex)
{
    // etc... execution won't reach here for this mesh
}
So how can I know which vertices will have which normals?
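From the code above, my understanding is that GetPolygonVertex returns the control-point index for each polygon vertex, so presumably a by-control-point normal can be fetched once per polygon vertex so that it lines up with the by-polygon-vertex UVs, as in this untested sketch:

// Untested sketch: fetch the by-control-point normal once per polygon vertex.
for (int lPolyIndex = 0; lPolyIndex < lPolyCount; ++lPolyIndex)
{
    const int lPolySize = mesh->GetPolygonSize(lPolyIndex);
    for (int lVertIndex = 0; lVertIndex < lPolySize; ++lVertIndex)
    {
        // Control-point index of this polygon vertex.
        int lCtrlPointIndex = mesh->GetPolygonVertex(lPolyIndex, lVertIndex);
        int lNormalIndex = (lNormalElement->GetReferenceMode() == FbxGeometryElement::eDirect)
            ? lCtrlPointIndex
            : lNormalElement->GetIndexArray().GetAt(lCtrlPointIndex);
        FbxVector4 lNormal = lNormalElement->GetDirectArray().GetAt(lNormalIndex);
        // Push lNormal[0..2] alongside the UV fetched for this same polygon vertex.
    }
}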
I am trying to read the pixel data from a populated UTexture2D in an Unreal Engine C++ project. Before posting the question here, I tried the method described in this link: https://answers.unrealengine.com/questions/25594/accessing-pixel-values-of-texture2d.html. However, it doesn't work for me: all pixel values I get from the texture are garbage data.
I just want to get the depth values from a SceneCapture2D through a post-process material that contains a SceneTexture:Depth node. I need the depth values available in C++ so that I can do further processing with OpenCV. In Direct3D 11 a staging texture can be used for CPU reads, but in Unreal Engine I don't know how to create the equivalent of a staging texture. Since I can't get correct pixel values with the current method, I suspect I may be accessing a texture that is not CPU-readable.
Here is my experimental code for reading data back from an RGB UTexture2D.
Initialize the RGB Texture:
VideoTextureColor = UTexture2D::CreateTransient(640, 480, PF_B8G8R8A8);
VideoTextureColor->UpdateResource();
VideoUpdateTextureRegionColor = new FUpdateTextureRegion2D(0, 0, 0, 0, 640, 480);
ColorRegionData = new FUpdateTextureRegionsData;
PixelDepthData.Init(FColor(0, 0, 0, 255), 640 * 480);
// Populate the texture with blue
for (int i = 0; i < 640; i++) {
    for (int j = 0; j < 480; j++) {
        int idx = j * 640 + i;
        PixelDepthData[idx].B = 255;
        PixelDepthData[idx].G = 0;
        PixelDepthData[idx].R = 0;
        PixelDepthData[idx].A = 255;
    }
}
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
Then I update the texture again with the values stored in PixelDepthData (a TArray of FColor) and read the texture's data back into PixelDepthData, which should reproduce its old values:
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FRealSenseDelegator,
    ARealSenseDelegator*, RealSenseDelegator, this,
    {
        FColor* tmpImageDataPtr = static_cast<FColor*>(
            (RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY));
        for (uint32 j = 0; j < 480; j++) {
            for (uint32 i = 0; i < 640; i++) {
                uint32 idx = j * 640 + i;
                RealSenseDelegator->PixelDepthData[idx] = tmpImageDataPtr[idx];
                RealSenseDelegator->PixelDepthData[idx].A = 255;
            }
        }
        (RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Unlock();
    }
);
All I get is a white texture instead of a blue one in the visualization scene.
Does anyone know how to read the data of a UTexture2D object?
I figured it out. You have to get the UTexture2D's RHI texture reference first and then use RHILockTexture2D to read its data, and you have to do it on the render thread. The following code is just an example:
uint32 destStride = 0;
FTexture2DResource* uTex2DRes = (FTexture2DResource*)(RealSenseDelegator->VideoTexturePixelDepth)->Resource;
float* cpuDataPtr = (float*)RHILockTexture2D(
    uTex2DRes->GetTexture2DRHI(),
    0,
    RLM_ReadOnly,
    destStride,
    false);
for (uint32 j = 0; j < 480; j++) {
    for (uint32 i = 0; i < 640; i++) {
        uint32 idx = j * 640 + i;
        // TODO Read the pixel data right here
    }
}
RHIUnlockTexture2D(uTex2DRes->GetTexture2DRHI(), 0, false);
To run this on the render thread, you have to use a macro such as ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER (use this one if you only need to pass one parameter to the render thread).
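Putting it together, here is a sketch of the lock/read/unlock above wrapped in that macro, using the same names as the question (the command-name identifier FReadPixelDataCommand is arbitrary):

ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FReadPixelDataCommand, // arbitrary command name
    ARealSenseDelegator*, RealSenseDelegator, this,
    {
        uint32 destStride = 0;
        FTexture2DResource* uTex2DRes =
            (FTexture2DResource*)(RealSenseDelegator->VideoTexturePixelDepth)->Resource;
        float* cpuDataPtr = (float*)RHILockTexture2D(
            uTex2DRes->GetTexture2DRHI(), 0, RLM_ReadOnly, destStride, false);
        // ... read the pixels through cpuDataPtr here, honoring destStride ...
        RHIUnlockTexture2D(uTex2DRes->GetTexture2DRHI(), 0, false);
    }
);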
I'd like to get a list of all the uniforms & attribs used by a shader program object. glGetAttribLocation() & glGetUniformLocation() can be used to map a string to a location, but what I would really like is the list of strings without having to parse the GLSL code.
Note: in OpenGL 2.0, glGetObjectParameteriv() is replaced by glGetProgramiv(), and the enums are GL_ACTIVE_UNIFORMS & GL_ACTIVE_ATTRIBUTES.
Variables shared between both examples:
GLint i;
GLint count;
GLint size; // size of the variable
GLenum type; // type of the variable (float, vec3 or mat4, etc)
const GLsizei bufSize = 16; // maximum name length
GLchar name[bufSize]; // variable name in GLSL
GLsizei length; // name length
Attributes
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &count);
printf("Active Attributes: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveAttrib(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Attribute #%d Type: %u Name: %s\n", i, type, name);
}
Uniforms
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
printf("Active Uniforms: %d\n", count);
for (i = 0; i < count; i++)
{
glGetActiveUniform(program, (GLuint)i, bufSize, &length, &size, &type, name);
printf("Uniform #%d Type: %u Name: %s\n", i, type, name);
}
OpenGL Documentation / Variable Types
The various macros representing variable types (such as GL_FLOAT, GL_FLOAT_VEC3, GL_FLOAT_MAT4, etc.) can be found in the docs for glGetActiveAttrib and glGetActiveUniform.
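If you want readable output instead of the raw numeric value printed with %u, a small helper can translate the type enum (a sketch that covers only a handful of the many possible types):

// Sketch: map a few common GLenum type values to readable names.
const char* TypeName(GLenum type)
{
    switch (type)
    {
        case GL_FLOAT:      return "float";
        case GL_FLOAT_VEC2: return "vec2";
        case GL_FLOAT_VEC3: return "vec3";
        case GL_FLOAT_VEC4: return "vec4";
        case GL_FLOAT_MAT4: return "mat4";
        case GL_SAMPLER_2D: return "sampler2D";
        default:            return "other";
    }
}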
There has been a change in how this sort of thing is done in OpenGL. So let's present the old way and the new way.
Old Way
Linked shaders have the concept of a number of active uniforms and active attributes (vertex shader stage inputs). These are the uniforms/attributes that are in use by that shader. The number of these (as well as quite a few other things) can be queried with glGetProgramiv:
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTES, &numActiveAttribs);
glGetProgramiv(prog, GL_ACTIVE_UNIFORMS, &numActiveUniforms);
You can query active uniform blocks, transform feedback varyings, atomic counters, and similar things in this way.
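For example, a sketch of two of those queries:

GLint numUniformBlocks = 0, numTFVaryings = 0;
glGetProgramiv(prog, GL_ACTIVE_UNIFORM_BLOCKS, &numUniformBlocks);
glGetProgramiv(prog, GL_TRANSFORM_FEEDBACK_VARYINGS, &numTFVaryings);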
Once you have the number of active attributes/uniforms, you can start querying information about them. To get info about an attribute, you use glGetActiveAttrib; to get info about a uniform, you use glGetActiveUniform. As an example, extended from the above:
GLint maxAttribNameLength = 0;
glGetProgramiv(prog, GL_ACTIVE_ATTRIBUTE_MAX_LENGTH, &maxAttribNameLength);
std::vector<GLchar> nameData(maxAttribNameLength);
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveAttrib(prog, attrib, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength); // actualLength excludes the null terminator
}
Something similar can be done for uniforms. However, the GL_ACTIVE_UNIFORM_MAX_LENGTH trick can be buggy on some drivers. So I would suggest this:
std::vector<GLchar> nameData(256);
for(int unif = 0; unif < numActiveUniforms; ++unif)
{
GLint arraySize = 0;
GLenum type = 0;
GLsizei actualLength = 0;
glGetActiveUniform(prog, unif, nameData.size(), &actualLength, &arraySize, &type, &nameData[0]);
std::string name((char*)&nameData[0], actualLength); // actualLength excludes the null terminator
}
Also, for uniforms, there's glGetActiveUniformsiv, which can query the name lengths for every uniform all at once (as well as the types, array sizes, strides, and other parameters).
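A sketch of that batched query, reusing numActiveUniforms from above to fetch every name length in one call:

std::vector<GLuint> indices(numActiveUniforms);
for (GLuint u = 0; u < (GLuint)numActiveUniforms; ++u)
    indices[u] = u;
std::vector<GLint> nameLengths(numActiveUniforms);
glGetActiveUniformsiv(prog, numActiveUniforms, indices.data(),
                      GL_UNIFORM_NAME_LENGTH, nameLengths.data());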
New Way
This way lets you access pretty much everything about active variables in a successfully linked program (except for regular globals). It requires the ARB_program_interface_query extension, which is core in OpenGL 4.3.
It starts with a call to glGetProgramInterfaceiv, to query the number of active attributes/uniforms. Or whatever else you may want.
GLint numActiveAttribs = 0;
GLint numActiveUniforms = 0;
glGetProgramInterfaceiv(prog, GL_PROGRAM_INPUT, GL_ACTIVE_RESOURCES, &numActiveAttribs);
glGetProgramInterfaceiv(prog, GL_UNIFORM, GL_ACTIVE_RESOURCES, &numActiveUniforms);
Attributes are just vertex shader inputs; GL_PROGRAM_INPUT means the inputs to the first program in the program object.
You can then loop over the number of active resources, asking for info on each one in turn, from glGetProgramResourceiv and glGetProgramResourceName:
std::vector<GLchar> nameData(256);
std::vector<GLenum> properties;
properties.push_back(GL_NAME_LENGTH);
properties.push_back(GL_TYPE);
properties.push_back(GL_ARRAY_SIZE);
std::vector<GLint> values(properties.size());
for(int attrib = 0; attrib < numActiveAttribs; ++attrib)
{
glGetProgramResourceiv(prog, GL_PROGRAM_INPUT, attrib, properties.size(),
&properties[0], values.size(), NULL, &values[0]);
nameData.resize(values[0]); //The length of the name.
glGetProgramResourceName(prog, GL_PROGRAM_INPUT, attrib, nameData.size(), NULL, &nameData[0]);
std::string name((char*)&nameData[0], nameData.size() - 1);
}
The exact same code would work for GL_UNIFORM; just swap numActiveAttribs with numActiveUniforms.
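Concretely, with the loop running over numActiveUniforms, the two calls inside it become (sketch):

glGetProgramResourceiv(prog, GL_UNIFORM, unif, properties.size(),
    &properties[0], values.size(), NULL, &values[0]);
nameData.resize(values[0]); // the length of the name
glGetProgramResourceName(prog, GL_UNIFORM, unif, nameData.size(), NULL, &nameData[0]);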
For anyone out there who finds this question looking to do this in WebGL, here's the WebGL equivalent:
var program = gl.createProgram();
// ...attach shaders, link...
var na = gl.getProgramParameter(program, gl.ACTIVE_ATTRIBUTES);
console.log(na, 'attributes');
for (var i = 0; i < na; ++i) {
var a = gl.getActiveAttrib(program, i);
console.log(i, a.size, a.type, a.name);
}
var nu = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
console.log(nu, 'uniforms');
for (var i = 0; i < nu; ++i) {
var u = gl.getActiveUniform(program, i);
console.log(i, u.size, u.type, u.name);
}
Here is the corresponding code in Python for getting the uniforms:
from OpenGL import GL
...
num_active_uniforms = GL.glGetProgramiv(program, GL.GL_ACTIVE_UNIFORMS)
for u in range(num_active_uniforms):
name, size, type_ = GL.glGetActiveUniform(program, u)
location = GL.glGetUniformLocation(program, name)
Apparently the 'new way' mentioned by Nicol Bolas does not work in Python.