I was trying to use a texture view as a framebuffer attachment.
It works fine when the texture view uses the base mipmap level, but when the texture view specifies a mip level above the base level,
glCheckFramebufferStatus() returns error code 36057, which is GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS.
I checked that there is no size mismatch between the texture view and the original texture.
Here is the code that builds the prefiltered cubemap for PBR rendering:
unsigned int maxMipLevels = 5;
for (int mip = 0; mip < maxMipLevels; ++mip)
{
unsigned int mipWidth = 128 * std::pow(0.5, mip);
unsigned int mipHeight = 128 * std::pow(0.5, mip);
RenderCommand::SetViewport(0, 0, mipWidth, mipHeight);
m_DepthTexture->SetSize(mipWidth, mipHeight);
CubeMapPass->DetachAll();
CubeMapPass->AttachTexture(m_DepthTexture, AttachmentType::Depth_Stencil, TextureType::Texture2D, 0);
float roughtness = (float)mip / (float)(maxMipLevels - 1);
roughBuffer.roughtness = roughtness;
roughnessConstantBuffer->SetData(&roughBuffer,sizeof(roughBuffer),0);
for (int i = 0; i < 6; ++i)
{
....
...
CameraBuffer.view = captureViews[i];
cameraConstantBuffer->SetData(&CameraBuffer, sizeof(CameraData), 0);
CubeMapPass->AttachTexture(m_PrefilterMap, AttachmentType::Color_0, TextureType::Texture2D, mip,i,1);
CubeMapPass->Clear();
....
...
}
}
The depth-stencil texture is resized to match the cubemap face size for each mip level
(the cubemap texture starts at 128x128).
In the AttachTexture function:
// m_RendererID is framebufferid
Ref<OpenGLRenderTargetView> targetview=CreateRef<OpenGLRenderTargetView>(type, texture, texture->GetDesc().Format, mipLevel, layerStart, layerLevel);
targetview->Bind(m_RendererID, attachType);
In the texture view's Bind function:
GLenum attachmentIndex = Utils::GetAttachmentType(type);
GLenum target = Utils::GetTextureTarget(viewDesc.Type, multisampled);
glNamedFramebufferTexture(framebufferid, attachmentIndex, m_renderID, m_startMip);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
As I said before, there is no problem when the texture view targets the base mip level,
but after the first loop iteration the texture view targets the next mip level (the texture size becomes 64x64, and the depth-stencil texture is also 64x64),
and the framebuffer has two attachments, one color (the texture view) and one depth-stencil texture,
yet glCheckFramebufferStatus keeps reporting GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS after the first iteration.
I read up on texture views here: https://www.khronos.org/opengl/wiki/Texture_Storage
According to that article,
glTextureView(..., GLuint minlevel, GLuint numlevels, GLuint minlayer, GLuint numlayers)
creates a texture view of the original texture, and minlevel becomes the base mip level of the view.
As you can see in my AttachTexture function, it takes a mip level, and the next parameters are the array start and array count.
When creating the texture view,
glTextureView(m_renderID, target, srcTexID, internalformat, startMip, 1, startLayer, numlayer);
it only takes one mip level, not two or three.
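As far as I understand from the wiki, the view's mip numbering restarts at zero, i.e. minlevel of the source becomes level 0 of the view. A standalone sketch of what I mean (placeholder names, assuming GL 4.3+, an immutable 128x128 cubemap and a matching internal format; this is not my engine code):
// Create a 2D view of one face and one mip of the cubemap.
GLuint viewTex;
glGenTextures(1, &viewTex);
// minlevel = 2 selects the 32x32 mip of the 128x128 source texture.
glTextureView(viewTex, GL_TEXTURE_2D, srcCubemap, GL_RGBA16F,
              /*minlevel*/ 2, /*numlevels*/ 1, /*minlayer*/ face, /*numlayers*/ 1);
// Inside the view, that single level is renumbered and addressed as level 0.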
I don't know why it doesn't work.
It does work when I'm not using a texture view,
with code like the one below:
if (texture->GetDesc().Type == TextureType::TextureCube &&type == TextureType::Texture2D)
{
target = GL_TEXTURE_CUBE_MAP_POSITIVE_X + layerStart;
}
if (type == TextureType::TextureCube)
glFramebufferTexture(GL_FRAMEBUFFER, attachmentIndex, texId, mipLevel);
else
glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndex, target, texId, mipLevel);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
Related
I'm not sure which structure layout is best suited for my application: shared, packed, std140, or std430. I'm not asking for an explanation of each, that information is easy to find; it's just hard to figure out the impact each will have on vendor compatibility/performance. Since shared is the default, I suspect that's a good starting point.
From what I can gather, I have to query the alignments/offsets when using shared or packed, because they are implementation specific.
What's the API for querying them? Is there some parameter I pass to glGetShaderiv while the compute shader is bound that lets me figure out the alignments?
Use glGetProgramInterfaceiv with the parameter GL_SHADER_STORAGE_BLOCK to get the number of
Shader Storage Buffer Objects and the maximum name length.
The maximum name length of the buffer variables can be obtained from the program interface GL_BUFFER_VARIABLE:
GLuint prog_obj; // shader program object
GLint no_of, ssbo_max_len, var_max_len;
glGetProgramInterfaceiv(prog_obj, GL_SHADER_STORAGE_BLOCK, GL_ACTIVE_RESOURCES, &no_of);
glGetProgramInterfaceiv(prog_obj, GL_SHADER_STORAGE_BLOCK, GL_MAX_NAME_LENGTH, &ssbo_max_len);
glGetProgramInterfaceiv(prog_obj, GL_BUFFER_VARIABLE, GL_MAX_NAME_LENGTH, &var_max_len);
The name of the SSBO can be obtained with glGetProgramResourceName, and a resource index with glGetProgramResourceIndex:
std::vector< GLchar > name( ssbo_max_len );
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// get name of the shader storage block
GLsizei strLength;
glGetProgramResourceName(
prog_obj, GL_SHADER_STORAGE_BLOCK, i_resource, ssbo_max_len, &strLength, name.data());
// get resource index of the shader storage block
GLint resInx = glGetProgramResourceIndex(prog_obj, GL_SHADER_STORAGE_BLOCK, name.data());
// [...]
}
Data of the shader storage block can be retrieved with glGetProgramResourceiv. See also Program Introspection.
Get the number of buffer variables and their indices from the program interface GL_SHADER_STORAGE_BLOCK and the shader storage block resource index resInx:
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// [...]
GLint resInx = ...
// get number of the buffer variables in the shader storage block
GLenum prop = GL_NUM_ACTIVE_VARIABLES;
GLint num_var;
glGetProgramResourceiv(
prog_obj, GL_SHADER_STORAGE_BLOCK, resInx, 1, &prop,
1, nullptr, &num_var);
// get resource indices of the buffer variables
std::vector<GLint> vars(num_var);
prop = GL_ACTIVE_VARIABLES;
glGetProgramResourceiv(
prog_obj, GL_SHADER_STORAGE_BLOCK, resInx,
1, &prop, (GLsizei)vars.size(), nullptr, vars.data());
// [...]
}
Get the offsets of the buffer variables, in basic machine units, relative to the base of the buffer, and their names, from the program interface GL_BUFFER_VARIABLE and the resource indices vars[]:
for( int i_resource = 0; i_resource < no_of; i_resource++ ) {
// [...]
std::vector<GLint> offsets(num_var);
std::vector<std::string> var_names(num_var);
for (GLint i = 0; i < num_var; i++) {
// get offset of buffer variable relative to SSBO
GLenum prop = GL_OFFSET;
glGetProgramResourceiv(
prog_obj, GL_BUFFER_VARIABLE, vars[i],
1, &prop, (GLsizei)offsets.size(), nullptr, &offsets[i]);
// get name of buffer variable
std::vector<GLchar>var_name(var_max_len);
GLsizei strLength;
glGetProgramResourceName(
prog_obj, GL_BUFFER_VARIABLE, vars[i],
var_max_len, &strLength, var_name.data());
var_names[i] = var_name.data();
}
// [...]
}
See also ARB_shader_storage_buffer_object
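Once the offsets are known, they can be used to fill the buffer. A minimal sketch (assuming an already created SSBO named ssbo of size buffer_size, and that every buffer variable happens to be a single float):
// Write one float per buffer variable at its queried offset.
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
GLubyte* ptr = (GLubyte*)glMapBufferRange(
    GL_SHADER_STORAGE_BUFFER, 0, buffer_size, GL_MAP_WRITE_BIT);
for (GLint i = 0; i < num_var; ++i)
{
    float value = 1.0f; // whatever belongs in var_names[i]
    memcpy(ptr + offsets[i], &value, sizeof(float));
}
glUnmapBuffer(GL_SHADER_STORAGE_BUFFER);
Bind the buffer to the block's binding point with glBindBufferBase(GL_SHADER_STORAGE_BUFFER, binding, ssbo) before dispatching.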
I'm testing writing to 2D and 3D textures in compute shaders, outputting a gradient noise texture consisting of 32-bit floats. Writing to a 2D texture works fine, but writing to a 3D texture doesn't. Are there additional considerations that need to be made when creating a 3D texture compared to a 2D texture?
Code for how I'm defining the 3D texture below:
HRESULT BaseComputeShader::CreateTexture3D(UINT width, UINT height, UINT depth, DXGI_FORMAT format, ID3D11Texture3D** texture)
{
D3D11_TEXTURE3D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(textureDesc));
textureDesc.Width = width;
textureDesc.Height = height;
textureDesc.Depth = depth;
textureDesc.MipLevels = 1;
textureDesc.Format = format;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_UNORDERED_ACCESS;
textureDesc.CPUAccessFlags = 0;
textureDesc.MiscFlags = 0;
return renderer->CreateTexture3D(&textureDesc, 0, texture);
}
HRESULT BaseComputeShader::CreateTexture3DUAV(UINT depth, DXGI_FORMAT format, ID3D11Texture3D** texture, ID3D11UnorderedAccessView** unorderedAccessView)
{
D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc;
ZeroMemory(&uavDesc, sizeof(uavDesc));
uavDesc.Format = format;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_TEXTURE3D;
uavDesc.Texture3D.MipSlice = 0;
uavDesc.Texture3D.FirstWSlice = 0;
uavDesc.Texture3D.WSize = depth;
return renderer->CreateUnorderedAccessView(*texture, &uavDesc, unorderedAccessView);
}
HRESULT BaseComputeShader::CreateTexture3DSRV(DXGI_FORMAT format, ID3D11Texture3D** texture, ID3D11ShaderResourceView** shaderResourceView)
{
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc;
ZeroMemory(&srvDesc, sizeof(srvDesc));
srvDesc.Format = format;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE3D;
srvDesc.Texture3D.MostDetailedMip = 0;
srvDesc.Texture3D.MipLevels = 1;
return renderer->CreateShaderResourceView(*texture, &srvDesc, shaderResourceView);
}
And how I'm writing to it in the compute shader:
// The texture we're writing to
RWTexture3D<float> outputTexture : register(u0);
[numthreads(8, 8, 8)]
void main(uint3 DTid : SV_DispatchThreadID)
{
float noiseValue = 0.0f;
float value = 0.0f;
float localAmplitude = amplitude;
float localFrequency = frequency;
// Loop for the number of octaves, running the noise function as many times as desired (8 is usually sufficient)
for (int k = 0; k < octaves; k++)
{
noiseValue = noise(float3(DTid.x * localFrequency, DTid.y * localFrequency, DTid.z * localFrequency)) * localAmplitude;
value += noiseValue;
// Calculate a new amplitude based on the input persistence/gain value
// amplitudeLoop will get smaller as the number of layers (i.e. k) increases
localAmplitude *= persistence;
// Calculate a new frequency based on a lacunarity value of 2.0
// This gives us 2^k as the frequency
// i.e. Frequency at k = 4 will be f * 2^4 as we have looped 4 times
localFrequency *= 2.0f;
}
// Output value to the 3D index in the texture provided by thread indexing
outputTexture[DTid.xyz] = value;
}
And finally, how I'm running the shader:
// Set the shader
deviceContext->CSSetShader(computeShader, nullptr, 0);
// Set the shader's buffers and views
deviceContext->CSSetConstantBuffers(0, 1, &cBuffer);
deviceContext->CSSetUnorderedAccessViews(0, 1, &textureUAV, nullptr);
// Launch the shader
deviceContext->Dispatch(512, 512, 512);
// Reset the shader now we're done
deviceContext->CSSetShader(nullptr, nullptr, 0);
// Reset the shader views
ID3D11UnorderedAccessView* ppUAViewnullptr[1] = { nullptr };
deviceContext->CSSetUnorderedAccessViews(0, 1, ppUAViewnullptr, nullptr);
// Create the shader resource view for access in other shaders
HRESULT result = CreateTexture3DSRV(DXGI_FORMAT_R32_FLOAT, &texture, &textureSRV);
if (result != S_OK)
{
MessageBox(NULL, L"Failed to create texture SRV after compute shader execution", L"Failed", MB_OK);
exit(0);
}
My bad, simple mistake. Compute shader threads are limited in number: a single thread group is limited to a total of 1024 threads, and a dispatch call cannot launch more than 65535 thread groups. The HLSL compiler will catch the former issue, but the Visual C++ compiler will not catch the latter.
If you create a texture of 512 * 512 * 512 (which seems to be what you are trying to achieve), your dispatch needs to be divided into groups:
deviceContext->Dispatch(512 / 8, 512 / 8, 512 / 8);
In your previous case, the dispatch was:
512*8 * 512*8 * 512*8 = 68719476736 threads,
which very likely triggered the timeout detection and crashed the driver.
Also the limit of 65535 is per dimension, so in your case you are completely safe to run this.
And one last thing: you can create both the shader resource view and the unordered access view right after creating your 3D texture (before the dispatch call).
This is generally recommended, to avoid mixing context code and resource creation code.
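A sketch of that ordering (reusing the question's helper methods, error checks omitted for brevity):
// Create the resource and both views up front...
ID3D11Texture3D* texture = nullptr;
ID3D11UnorderedAccessView* textureUAV = nullptr;
ID3D11ShaderResourceView* textureSRV = nullptr;
CreateTexture3D(512, 512, 512, DXGI_FORMAT_R32_FLOAT, &texture);
CreateTexture3DUAV(512, DXGI_FORMAT_R32_FLOAT, &texture, &textureUAV);
CreateTexture3DSRV(DXGI_FORMAT_R32_FLOAT, &texture, &textureSRV);
// ...then do the context work.
deviceContext->CSSetShader(computeShader, nullptr, 0);
deviceContext->CSSetConstantBuffers(0, 1, &cBuffer);
deviceContext->CSSetUnorderedAccessViews(0, 1, &textureUAV, nullptr);
// One group covers 8x8x8 threads, so divide the problem size (round up for non-multiples).
UINT gx = (512 + 7) / 8, gy = (512 + 7) / 8, gz = (512 + 7) / 8;
deviceContext->Dispatch(gx, gy, gz);
deviceContext->CSSetShader(nullptr, nullptr, 0);
ID3D11UnorderedAccessView* nullUAV[1] = { nullptr };
deviceContext->CSSetUnorderedAccessViews(0, 1, nullUAV, nullptr);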
On resource creation, your check is not valid either:
if (result != S_OK)
The HRESULT success condition is >= 0, so success codes other than S_OK can be returned.
You can use the built-in macros instead, e.g.:
if (FAILED(result))
for the error path (SUCCEEDED(result) is the success-path counterpart).
How can I get the number of color attachments of the currently bound FBO? I checked glGetInteger and glGetFramebufferAttachmentParameteriv, but they don't seem to have an enum to get these values.
Untested, but this should do it:
GLint maxAtt = 0;
glGetIntegerv(GL_MAX_COLOR_ATTACHMENTS, &maxAtt);
int nAtt = 0;
for (int iAtt = 0; iAtt < maxAtt; ++iAtt) {
GLint objType = GL_NONE;
glGetFramebufferAttachmentParameteriv(
GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iAtt,
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, &objType);
if (objType != GL_NONE) {
++nAtt;
}
}
// nAtt is the number of color attachments.
You could do something similar using GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME instead of GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE, and comparing the value to 0.
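For completeness, that variant (also untested) would just swap the queried parameter:
GLint objName = 0;
glGetFramebufferAttachmentParameteriv(
    GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + iAtt,
    GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME, &objName);
if (objName != 0) {
    ++nAtt;
}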
We are trying to use librocket (http://librocket.com/) together with Ogre (http://www.ogre3d.org/). They're both part of gamekit (http://code.google.com/p/gamekit/), which I use for this project.
This all works fine together as long as I don't load an image with librocket. As soon as I do, the viewport on the iPad is no longer fullscreen but small in the lower corner. Like this: http://uploads.undef.ch/machine/ipad.png
I can't make a connection between loading/rendering a texture and resizing of the viewport, and I can't find anything wrong with the RenderInterface: http://uploads.undef.ch/machine/RenderInterfaceOgre3D.cpp
Is there any OpenGL ES command that could have an effect on the active viewport size?
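The only state I know how to check directly is the viewport rectangle itself; this is just a debug probe I could sprinkle before and after the librocket image load (plain GL ES, not engine code):
// Print the currently active viewport rectangle.
GLint vp[4] = { 0, 0, 0, 0 };
glGetIntegerv(GL_VIEWPORT, vp);
printf("viewport: x=%d y=%d w=%d h=%d\n", vp[0], vp[1], vp[2], vp[3]);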
This is the relevant code that loads an image and displays it:
// Called by Rocket when a texture is required by the library.
bool RenderInterfaceOgre3D::LoadTexture(Rocket::Core::TextureHandle& texture_handle, Rocket::Core::Vector2i& texture_dimensions, const Rocket::Core::String& source)
{
Ogre::TextureManager* texture_manager = Ogre::TextureManager::getSingletonPtr();
Ogre::TexturePtr ogre_texture = texture_manager->getByName(Ogre::String(source.CString()));
if (ogre_texture.isNull())
{
ogre_texture = texture_manager->load(Ogre::String(source.CString()),
DEFAULT_ROCKET_RESOURCE_GROUP,
Ogre::TEX_TYPE_2D,
0);
}
if (ogre_texture.isNull())
return false;
texture_dimensions.x = ogre_texture->getWidth();
texture_dimensions.y = ogre_texture->getHeight();
texture_handle = reinterpret_cast<Rocket::Core::TextureHandle>(new RocketOgre3DTexture(ogre_texture));
return true;
}
// Called by Rocket when it wants to render geometry that it does not wish to optimise.
void RenderInterfaceOgre3D::RenderGeometry(Rocket::Core::Vertex* vertices, int num_vertices, int* indices, int num_indices, Rocket::Core::TextureHandle texture, const Rocket::Core::Vector2f& translation)
{
// We've chosen to not support non-compiled geometry in the Ogre3D renderer.
// But if you want, you can uncomment this code, so borders will be shown.
/*
Rocket::Core::CompiledGeometryHandle gh = CompileGeometry(vertices, num_vertices, indices, num_indices, texture);
RenderCompiledGeometry(gh, translation);
ReleaseCompiledGeometry(gh);
*/
}
// Called by Rocket when it wants to compile geometry it believes will be static for the forseeable future.
Rocket::Core::CompiledGeometryHandle RenderInterfaceOgre3D::CompileGeometry(Rocket::Core::Vertex* vertices, int num_vertices, int* indices, int num_indices, Rocket::Core::TextureHandle texture)
{
RocketOgre3DCompiledGeometry* geometry = new RocketOgre3DCompiledGeometry();
geometry->texture = texture == NULL ? NULL : (RocketOgre3DTexture*) texture;
geometry->render_operation.vertexData = new Ogre::VertexData();
geometry->render_operation.vertexData->vertexStart = 0;
geometry->render_operation.vertexData->vertexCount = num_vertices;
geometry->render_operation.indexData = new Ogre::IndexData();
geometry->render_operation.indexData->indexStart = 0;
geometry->render_operation.indexData->indexCount = num_indices;
geometry->render_operation.operationType = Ogre::RenderOperation::OT_TRIANGLE_LIST;
// Set up the vertex declaration.
Ogre::VertexDeclaration* vertex_declaration = geometry->render_operation.vertexData->vertexDeclaration;
size_t element_offset = 0;
vertex_declaration->addElement(0, element_offset, Ogre::VET_FLOAT3, Ogre::VES_POSITION);
element_offset += Ogre::VertexElement::getTypeSize(Ogre::VET_FLOAT3);
vertex_declaration->addElement(0, element_offset, Ogre::VET_COLOUR, Ogre::VES_DIFFUSE);
element_offset += Ogre::VertexElement::getTypeSize(Ogre::VET_COLOUR);
vertex_declaration->addElement(0, element_offset, Ogre::VET_FLOAT2, Ogre::VES_TEXTURE_COORDINATES);
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
// Create the vertex buffer.
Ogre::HardwareVertexBufferSharedPtr vertex_buffer = Ogre::HardwareBufferManager::getSingleton().createVertexBuffer(vertex_declaration->getVertexSize(0), num_vertices, Ogre::HardwareBuffer::HBU_DYNAMIC_WRITE_ONLY_DISCARDABLE,true);
geometry->render_operation.vertexData->vertexBufferBinding->setBinding(0, vertex_buffer);
// Fill the vertex buffer.
RocketOgre3DVertex* ogre_vertices = (RocketOgre3DVertex*) vertex_buffer->lock(0, vertex_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_DISCARD);
#else
Ogre::HardwareVertexBufferSharedPtr vertex_buffer = Ogre::HardwareBufferManager::getSingleton().createVertexBuffer(vertex_declaration->getVertexSize(0), num_vertices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.vertexData->vertexBufferBinding->setBinding(0, vertex_buffer);
// Fill the vertex buffer.
RocketOgre3DVertex* ogre_vertices = (RocketOgre3DVertex*) vertex_buffer->lock(0, vertex_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_NORMAL);
#endif
for (int i = 0; i < num_vertices; ++i)
{
ogre_vertices[i].x = vertices[i].position.x;
ogre_vertices[i].y = vertices[i].position.y;
ogre_vertices[i].z = 0;
Ogre::ColourValue diffuse(vertices[i].colour.red / 255.0f, vertices[i].colour.green / 255.0f, vertices[i].colour.blue / 255.0f, vertices[i].colour.alpha / 255.0f);
render_system->convertColourValue(diffuse, &ogre_vertices[i].diffuse);
ogre_vertices[i].u = vertices[i].tex_coord[0];
ogre_vertices[i].v = vertices[i].tex_coord[1];
}
vertex_buffer->unlock();
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
// Create the index buffer.
Ogre::HardwareIndexBufferSharedPtr index_buffer = Ogre::HardwareBufferManager::getSingleton().createIndexBuffer(Ogre::HardwareIndexBuffer::IT_16BIT, num_indices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.indexData->indexBuffer = index_buffer;
geometry->render_operation.useIndexes = true;
#else
Ogre::HardwareIndexBufferSharedPtr index_buffer = Ogre::HardwareBufferManager::getSingleton().createIndexBuffer(Ogre::HardwareIndexBuffer::IT_32BIT, num_indices, Ogre::HardwareBuffer::HBU_STATIC_WRITE_ONLY);
geometry->render_operation.indexData->indexBuffer = index_buffer;
geometry->render_operation.useIndexes = true;
#endif
// Fill the index buffer.
unsigned short * ogre_indices = (unsigned short*)index_buffer->lock(0, index_buffer->getSizeInBytes(), Ogre::HardwareBuffer::HBL_NORMAL);
#if GK_PLATFORM == GK_PLATFORM_APPLE_IOS
//unsigned short short_indices[num_indices];
for(int i=0;i<num_indices;i++)
ogre_indices[i] = indices[i];
//memcpy(ogre_indices, short_indices, sizeof(unsigned short) * num_indices);
#else
memcpy(ogre_indices, indices, sizeof(unsigned int) * num_indices);
#endif
index_buffer->unlock();
return reinterpret_cast<Rocket::Core::CompiledGeometryHandle>(geometry);
}
// Called by Rocket when it wants to render application-compiled geometry.
void RenderInterfaceOgre3D::RenderCompiledGeometry(Rocket::Core::CompiledGeometryHandle geometry, const Rocket::Core::Vector2f& translation)
{
Ogre::Matrix4 transform;
transform.makeTrans(translation.x, translation.y, 0);
render_system->_setWorldMatrix(transform);
render_system = Ogre::Root::getSingleton().getRenderSystem();
RocketOgre3DCompiledGeometry* ogre3d_geometry = (RocketOgre3DCompiledGeometry*) geometry;
if (ogre3d_geometry->texture != NULL)
{
render_system->_setTexture(0, true, ogre3d_geometry->texture->texture);
// Ogre can change the blending modes when textures are disabled - so in case the last render had no texture,
// we need to re-specify them.
render_system->_setTextureBlendMode(0, colour_blend_mode);
render_system->_setTextureBlendMode(0, alpha_blend_mode);
}
else
render_system->_disableTextureUnit(0);
render_system->_render(ogre3d_geometry->render_operation);
}
I am trying to do some image processing on a UIImage using some EAGLView code from the GLImageProcessing sample from Apple. The sample code is configured to perform processing on a pre-installed image (Image.png). I am trying to modify the code so that it will accept a UIImage (or at least CGImage data) of my choice and process that instead. The problem is, the texture-loader function loadTexture() (below) seems to accept only C structures as parameters, and I have not been able to get it to accept a UIImage* or a CGImage as a parameter. Can someone give me a clue as to how to bridge the gap so that I can pass my UIImage into the C function?
------------ from Texture.h ---------------
#ifndef TEXTURE_H
#define TEXTURE_H
#include "Imaging.h"
void loadTexture(const char *name, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_H */
----------------from Texture.m---------------------
#import <UIKit/UIKit.h>
#import "Texture.h"
static unsigned int nextPOT(unsigned int x)
{
x = x - 1;
x = x | (x >> 1);
x = x | (x >> 2);
x = x | (x >> 4);
x = x | (x >> 8);
x = x | (x >>16);
return x + 1;
}
// This is not a fully generalized image loader. It is an example of how to use
// CGImage to directly access decompressed image data. Only the most commonly
// used image formats are supported. It will be necessary to expand this code
// to account for other uses, for example cubemaps or compressed textures.
//
// If the image format is supported, this loader will Gen a OpenGL 2D texture object
// and upload texels from it, padding to POT if needed. For image processing purposes,
// border pixels are also replicated here to ensure proper filtering during e.g. blur.
//
// The caller of this function is responsible for deleting the GL texture object.
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;
// Parse CGImage info
info = CGImageGetBitmapInfo(CGImage); // CGImage may return pixels in RGBA, BGRA, or ARGB order
colormodel = CGColorSpaceGetModel(CGImageGetColorSpace(CGImage));
size_t bpp = CGImageGetBitsPerPixel(CGImage);
if (bpp < 8 || bpp > 32 || (colormodel != kCGColorSpaceModelMonochrome && colormodel != kCGColorSpaceModelRGB))
{
// This loader does not support all possible CGImage types, such as paletted images
CGImageRelease(CGImage);
return;
}
components = bpp>>3;
rowBytes = CGImageGetBytesPerRow(CGImage); // CGImage may pad rows
rowPixels = rowBytes / components;
imgWide = CGImageGetWidth(CGImage);
imgHigh = CGImageGetHeight(CGImage);
img->wide = rowPixels;
img->high = imgHigh;
img->s = (float)imgWide / rowPixels;
img->t = 1.0;
// Choose OpenGL format
switch(bpp)
{
default:
rt_assert(0 && "Unknown CGImage bpp");
case 32:
{
internal = GL_RGBA;
switch(info & kCGBitmapAlphaInfoMask)
{
case kCGImageAlphaPremultipliedFirst:
case kCGImageAlphaFirst:
case kCGImageAlphaNoneSkipFirst:
format = GL_BGRA;
break;
default:
format = GL_RGBA;
}
break;
}
case 24:
internal = format = GL_RGB;
break;
case 16:
internal = format = GL_LUMINANCE_ALPHA;
break;
case 8:
internal = format = GL_LUMINANCE;
break;
}
// Get a pointer to the uncompressed image data.
//
// This allows access to the original (possibly unpremultiplied) data, but any manipulation
// (such as scaling) has to be done manually. Contrast this with drawing the image
// into a CGBitmapContext, which allows scaling, but always forces premultiplication.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
rt_assert(data);
pixels = (GLubyte *)CFDataGetBytePtr(data);
rt_assert(pixels);
// If the CGImage component layout isn't compatible with OpenGL, fix it.
// On the device, CGImage will generally return BGRA or RGBA.
// On the simulator, CGImage may return ARGB, depending on the file format.
if (format == GL_BGRA)
{
uint32_t *p = (uint32_t *)pixels;
int i, num = img->wide * img->high;
if ((info & kCGBitmapByteOrderMask) != kCGBitmapByteOrder32Host)
{
// Convert from ARGB to BGRA
for (i = 0; i < num; i++)
p[i] = (p[i] << 24) | ((p[i] & 0xFF00) << 8) | ((p[i] >> 8) & 0xFF00) | (p[i] >> 24);
}
// All current iPhoneOS devices support BGRA via an extension.
if (!renderer->extension[IMG_texture_format_BGRA8888])
{
format = GL_RGBA;
// Convert from BGRA to RGBA
for (i = 0; i < num; i++)
#if __LITTLE_ENDIAN__
p[i] = ((p[i] >> 16) & 0xFF) | (p[i] & 0xFF00FF00) | ((p[i] & 0xFF) << 16);
#else
p[i] = ((p[i] & 0xFF00) << 16) | (p[i] & 0xFF00FF) | ((p[i] >> 16) & 0xFF00);
#endif
}
}
// Determine if we need to pad this image to a power of two.
// There are multiple ways to deal with NPOT images on renderers that only support POT:
// 1) scale down the image to POT size. Loses quality.
// 2) pad up the image to POT size. Wastes memory.
// 3) slice the image into multiple POT textures. Requires more rendering logic.
//
// We are only dealing with a single image here, and pick 2) for simplicity.
//
// If you prefer 1), you can use CoreGraphics to scale the image into a CGBitmapContext.
POTWide = nextPOT(img->wide);
POTHigh = nextPOT(img->high);
if (!renderer->extension[APPLE_texture_2D_limited_npot] && (img->wide != POTWide || img->high != POTHigh))
{
GLuint dstBytes = POTWide * components;
GLubyte *temp = (GLubyte *)malloc(dstBytes * POTHigh);
for (y = 0; y < img->high; y++)
memcpy(&temp[y*dstBytes], &pixels[y*rowBytes], rowBytes);
img->s *= (float)img->wide/POTWide;
img->t *= (float)img->high/POTHigh;
img->wide = POTWide;
img->high = POTHigh;
pixels = temp;
rowBytes = dstBytes;
}
// For filters that sample texel neighborhoods (like blur), we must replicate
// the edge texels of the original input, to simulate CLAMP_TO_EDGE.
{
GLuint replicatew = MIN(MAX_FILTER_RADIUS, img->wide-imgWide);
GLuint replicateh = MIN(MAX_FILTER_RADIUS, img->high-imgHigh);
GLuint imgRow = imgWide * components;
for (y = 0; y < imgHigh; y++)
for (x = 0; x < replicatew; x++)
memcpy(&pixels[y*rowBytes+imgRow+x*components], &pixels[y*rowBytes+imgRow-components], components);
for (y = imgHigh; y < imgHigh+replicateh; y++)
memcpy(&pixels[y*rowBytes], &pixels[(imgHigh-1)*rowBytes], imgRow+replicatew*components);
}
if (img->wide <= renderer->maxTextureSize && img->high <= renderer->maxTextureSize)
{
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
// Set filtering parameters appropriate for this application (image processing on screen-aligned quads.)
// Depending on your needs, you may prefer linear filtering, or mipmap generation.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, internal, img->wide, img->high, 0, format, GL_UNSIGNED_BYTE, pixels);
}
if (temp) free(temp);
CFRelease(data);
CGImageRelease(CGImage);
img->texID = texID;
}
Side note: the above code is the original, unmodified sample code from Apple and does not generate any errors when compiled. However, when I try to modify the .h and .m to accept a UIImage* parameter (as below), the compiler generates the following error: "Error: expected declaration specifiers or "..." before UIImage"
----------Modified .h Code that generates the Compiler Error:-------------
void loadTexture(const char name, Image *img, RendererInfo *renderer, UIImage* newImage)
You are probably importing this .h into a .c file somewhere. That tells the compiler to use C rather than Objective-C. UIKit.h (and its many children) are Objective-C and cannot be compiled by a C compiler.
You could rename all your .c files to .m, but what you probably really want is just to use CGImageRef and import CGImage.h. CoreGraphics is C-based; UIKit is Objective-C. There is no problem, if you want, with Texture.m being Objective-C. Just make sure that Texture.h is pure C. Alternatively (and I do this a lot with C++ code), you can make a Texture+C.h header that provides just the C-safe functions you want to expose. Import Texture.h in Objective-C code, and Texture+C.h in C code. Or name them the other way around if more convenient, with a Texture+ObjC.h.
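For example, a C-safe companion header might look like this (the function name is made up for illustration, not part of the sample):
/* Texture+C.h - pure C, safe to include from .c files */
#ifndef TEXTURE_PLUS_C_H
#define TEXTURE_PLUS_C_H
#include <CoreGraphics/CoreGraphics.h>
#include "Imaging.h"
/* Same job as loadTexture, but takes a CGImageRef so no UIKit types leak into C code. */
void loadTextureFromCGImage(CGImageRef image, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_PLUS_C_H */
Texture.m (Objective-C) can implement that function and keep using UIImage internally; Objective-C callers can simply pass someUIImage.CGImage.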
It sounds like your file isn't importing the UIKit header.
Why are you passing a new image to loadTexture, instead of using loadTexture's own UIImage loading to open the new image you want?
loadTexture:
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
[Why not have the following fetch your UIImage?]
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;