How does OpenGL handle multiple clear masks?

I've been working with OpenGL for some time now, and while I understand how to use it, I'm quite interested in how it handles and understands multiple masks. Example:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
//How does it understand that I want to clear the
//color buffer and the depth buffer?
At first I thought they might be using static variables like so:
GL_COLOR_AND_DEPTH_BUFFER_BIT = GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT;
But then I realized that they would need hundreds of these to cover every single possible combination, which seems silly. So how does it interpret the argument and find out which masks I wish to clear?

As Colonel Thirty Two pointed out, a GL implementation can use bit tests:
void GLAPIENTRY
_mesa_Clear( GLbitfield mask )
{
   ...
   /* Accumulation buffers were removed in core contexts, and they never
    * existed in OpenGL ES.
    */
   if ((mask & GL_ACCUM_BUFFER_BIT) != 0
       && (ctx->API == API_OPENGL_CORE || _mesa_is_gles(ctx))) {
      _mesa_error( ctx, GL_INVALID_VALUE, "glClear(GL_ACCUM_BUFFER_BIT)");
      return;
   }
   ...
   if (ctx->RenderMode == GL_RENDER) {
      ...
      if ((mask & GL_DEPTH_BUFFER_BIT)
          && ctx->DrawBuffer->Visual.haveDepthBuffer) {
         bufferMask |= BUFFER_BIT_DEPTH;
      }
      if ((mask & GL_STENCIL_BUFFER_BIT)
          && ctx->DrawBuffer->Visual.haveStencilBuffer) {
         bufferMask |= BUFFER_BIT_STENCIL;
      }
      if ((mask & GL_ACCUM_BUFFER_BIT)
          && ctx->DrawBuffer->Visual.haveAccumBuffer) {
         bufferMask |= BUFFER_BIT_ACCUM;
      }
      assert(ctx->Driver.Clear);
      ctx->Driver.Clear(ctx, bufferMask);
   }
}
See the mask & GL_DEPTH_BUFFER_BIT-style tests: each flag occupies its own bit, so a bitwise AND tells the implementation whether that particular buffer was requested.
#define GL_DEPTH_BUFFER_BIT 0x00000100 // == 0b000000100000000
#define GL_ACCUM_BUFFER_BIT 0x00000200 // == 0b000001000000000
#define GL_STENCIL_BUFFER_BIT 0x00000400 // == 0b000010000000000
#define GL_COLOR_BUFFER_BIT 0x00004000 // == 0b100000000000000
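A minimal sketch of the idea (illustrative only, not actual driver code): because every flag is a distinct power of two, ORing them together loses nothing, and each flag can be recovered independently with a bitwise AND.
#include <cassert>

// Flag values copied from the defines above; each one occupies its own bit.
enum : unsigned {
    DEPTH_BIT = 0x00000100,
    COLOR_BIT = 0x00004000
};

void clearSketch(unsigned mask)
{
    if (mask & COLOR_BIT) { /* clear the color buffer */ }
    if (mask & DEPTH_BIT) { /* clear the depth buffer */ }
}

int main()
{
    unsigned mask = COLOR_BIT | DEPTH_BIT;   // 0x4100: both bits set, no overlap
    assert((mask & COLOR_BIT) && (mask & DEPTH_BIT));
    clearSketch(mask);
}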

Related

UE4 capture frame using ID3D11Texture2D and convert to R8G8B8 bitmap

I'm working on a streaming prototype using UE4.
My goal here (in this post) is solely about capturing frames and saving one as a bitmap, just to visually ensure frames are correctly captured.
I'm currently capturing frames converting the backbuffer to a ID3D11Texture2D then mapping it.
Note: I tried the ReadSurfaceData approach in the render thread, but it performed very poorly (FPS went down to 15, and I'd like to capture at 60 FPS), whereas the DirectX texture mapping from the backbuffer currently takes 1 to 3 milliseconds.
When debugging, I can see the D3D11_TEXTURE2D_DESC's format is DXGI_FORMAT_R10G10B10A2_UNORM, so red/green/blues are stored on 10 bits each, and alpha on 2 bits.
My questions :
How do I convert the texture's data (using the D3D11_MAPPED_SUBRESOURCE pData pointer) to R8G8B8(A8), that is, 8 bits per channel (an R8G8B8 without the alpha would also be fine for me)?
Also, am I doing anything wrong when capturing the frame?
What I've tried :
All the following code is executed in a callback function registered to OnBackBufferReadyToPresent (code below).
void* NativeResource = BackBuffer->GetNativeResource();
if (NativeResource == nullptr)
{
    UE_LOG(LogTemp, Error, TEXT("Couldn't retrieve native resource"));
    return;
}
ID3D11Texture2D* BackBufferTexture = static_cast<ID3D11Texture2D*>(NativeResource);
D3D11_TEXTURE2D_DESC BackBufferTextureDesc;
BackBufferTexture->GetDesc(&BackBufferTextureDesc);

// Get the device context
ID3D11Device* d3dDevice;
BackBufferTexture->GetDevice(&d3dDevice);
ID3D11DeviceContext* d3dContext;
d3dDevice->GetImmediateContext(&d3dContext);

// Staging resource
ID3D11Texture2D* StagingTexture;
D3D11_TEXTURE2D_DESC StagingTextureDesc = BackBufferTextureDesc;
StagingTextureDesc.Usage = D3D11_USAGE_STAGING;
StagingTextureDesc.BindFlags = 0;
StagingTextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
StagingTextureDesc.MiscFlags = 0;

HRESULT hr = d3dDevice->CreateTexture2D(&StagingTextureDesc, nullptr, &StagingTexture);
if (FAILED(hr))
{
    UE_LOG(LogTemp, Error, TEXT("CreateTexture failed"));
}

// Copy the texture to the staging resource
d3dContext->CopyResource(StagingTexture, BackBufferTexture);

// Map the staging resource
D3D11_MAPPED_SUBRESOURCE mapInfo;
hr = d3dContext->Map(
    StagingTexture,
    0,
    D3D11_MAP_READ,
    0,
    &mapInfo);
if (FAILED(hr))
{
    UE_LOG(LogTemp, Error, TEXT("Map failed"));
}

// See https://dev.to/muiz6/c-how-to-write-a-bitmap-image-from-scratch-1k6m for the struct definitions & the initialization of bmpHeader and bmpInfoHeader
// I didn't copy that code here to avoid overloading this post, as it's identical to the article's code
// Just making clear the reassigned values below
bmpHeader.sizeOfBitmapFile = 54 + StagingTextureDesc.Width * StagingTextureDesc.Height * 4;
bmpInfoHeader.width = StagingTextureDesc.Width;
bmpInfoHeader.height = StagingTextureDesc.Height;

std::ofstream fout("output.bmp", std::ios::binary);
fout.write((char*)&bmpHeader, 14);
fout.write((char*)&bmpInfoHeader, 40);
// TODO : convert to R8G8B8 (see below for my attempt at this)
fout.close();

StagingTexture->Release();
d3dContext->Unmap(StagingTexture, 0);
d3dContext->Release();
d3dDevice->Release();
BackBufferTexture->Release();
(As mentioned in the code comments, I followed this article for the BMP headers when saving the bitmap to a file.)
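For reference, a minimal packed definition matching the 14-byte and 40-byte writes above might look like the following sketch (the field names are my own; the linked article uses names such as sizeOfBitmapFile). Note also that for 24-bit output each pixel row has to be padded to a multiple of 4 bytes when the width requires it.
#include <cstdint>

#pragma pack(push, 1)              // the headers must be written without padding
struct BmpFileHeader {             // 14 bytes
    uint16_t type = 0x4D42;        // 'BM'
    uint32_t fileSize = 0;         // 54 + size of the pixel data
    uint16_t reserved1 = 0;
    uint16_t reserved2 = 0;
    uint32_t pixelDataOffset = 54;
};
struct BmpInfoHeader {             // 40 bytes
    uint32_t headerSize = 40;
    int32_t  width = 0;
    int32_t  height = 0;           // positive height means bottom-up row order
    uint16_t planes = 1;
    uint16_t bitsPerPixel = 32;    // or 24 for R8G8B8 without alpha
    uint32_t compression = 0;      // BI_RGB, i.e. uncompressed
    uint32_t imageSize = 0;
    int32_t  xPixelsPerMeter = 0;
    int32_t  yPixelsPerMeter = 0;
    uint32_t colorsUsed = 0;
    uint32_t importantColors = 0;
};
#pragma pack(pop)
static_assert(sizeof(BmpFileHeader) == 14, "file header must be 14 bytes");
static_assert(sizeof(BmpInfoHeader) == 40, "info header must be 40 bytes");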
Texture data
One thing I'm concerned about is the retrieved data with this method.
I used a temporary array to check with the debugger what's inside.
// I noted the texture's width and height while debugging and hardcoded them here to allocate the right size
uint32_t data[1936 * 1056];
// Multiply by 4 as there are 4 bytes (32 bits) per pixel
memcpy(data, mapInfo.pData, StagingTextureDesc.Width * StagingTextureDesc.Height * 4);
It turns out that the first 1935 uint32 values in this array all contain the same value: 3595933029. After that, the same values are often repeated hundreds of times in a row.
This makes me think the frame isn't captured as it should be, because the UE4 editor's window doesn't have the exact same color all along its first row (whether that's the top or the bottom row).
R10G10B10A2 to R8G8B8(A8)
So I tried to guess how to convert from R10G10B10A2 to R8G8B8. I started from the value that appears 1935 times in a row at the beginning of the data buffer: 3595933029.
When I color pick a screenshot of the editor's window (using the Windows tool, which gives me an image with the exact same dimensions as the DirectX texture, that is 1936x1056), I get the following colors:
R=56, G=57, B=52 (top left & bottom left)
R=0, G=0, B=0 (top right)
R=46, G=40, B=72 (bottom right - it overlaps the task bar, thus the color)
So I tried to manually convert the color to check whether it matches any of the ones I color picked.
I thought about bit shifting to simply compare the values.
3595933029 (the value in the retrieved buffer) in binary: 11010110010101011001010101100101
You can already see the pattern: 11 followed three times by the 10-bit value 0101100101, and none of the picked colors follow this (except the black corner, which would be made only of zeros though).
Anyway, assuming RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB AA order (discarded bits are marked with an x):
11010110xx01010110xx01010110xxxx
R=214, G=86, B=86: doesn't match
Assuming AA RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB:
xx01011001xx01011001xx01011001xx
R=89, G=89, B=89: doesn't match
If that helps, here's the editor window that should be captured (it really is the Third Person template; I didn't add anything to it except this capture code).
Here's the generated bitmap when shifting bits:
Code to generate the bitmap's pixel data:
struct Pixel {
    uint8_t blue = 0;
    uint8_t green = 0;
    uint8_t red = 0;
} pixel;

uint32_t* pointer = (uint32_t*)mapInfo.pData;
size_t numberOfPixels = bmpInfoHeader.width * bmpInfoHeader.height;
for (int i = 0; i < numberOfPixels; i++) {
    uint32_t value = *pointer;
    // Ditch the color's 2 last bits, keep the 8 first
    pixel.blue = value >> 2;
    pixel.green = value >> 12;
    pixel.red = value >> 22;
    ++pointer;
    fout.write((char*)&pixel, 3);
}
The colors present look somewhat similar, but it doesn't look at all like the editor.
What am I missing?
First of all, you are assuming that mapInfo.RowPitch is exactly StagingTextureDesc.Width * 4. This is often not true. When copying to/from Direct3D resources, you need to do 'row-by-row' copies. Also, allocating 2 MBytes on the stack is not good practice.
#include <cstdint>
#include <cstring>   // for memcpy
#include <memory>

// Assumes our staging texture is 4 bytes-per-pixel
// Allocate temporary memory
auto data = std::unique_ptr<uint32_t[]>(
    new uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]);

auto src = static_cast<uint8_t*>(mapInfo.pData);
uint32_t* dest = data.get();
for (UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
    // Multiply by 4 as there are 4 bytes (32 bits) per pixel
    memcpy(dest, src, StagingTextureDesc.Width * sizeof(uint32_t));
    src += mapInfo.RowPitch;
    dest += StagingTextureDesc.Width;
}
For C++11, using std::unique_ptr ensures the memory is eventually released automatically. You can transfer ownership of the memory to something else with uint32_t* ptr = data.release(). See cppreference.
With C++14, the better way to write the allocation is: auto data = std::make_unique<uint32_t[]>(StagingTextureDesc.Width * StagingTextureDesc.Height);. This assumes you are fine with a C++ exception being thrown for out-of-memory.
If you want to return an error code for out-of-memory instead of a C++ exception, use: auto data = std::unique_ptr<uint32_t[]>(new (std::nothrow) uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]); if (!data) // return error
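Putting the three variants mentioned above side by side (Width and Height stand in for StagingTextureDesc.Width and StagingTextureDesc.Height):
#include <cstdint>
#include <memory>
#include <new>

void allocationVariants(size_t Width, size_t Height)
{
    const size_t count = Width * Height;

    // C++11: explicit new[]; throws std::bad_alloc on failure
    auto a = std::unique_ptr<uint32_t[]>(new uint32_t[count]);

    // C++14: make_unique; also throws on failure
    auto b = std::make_unique<uint32_t[]>(count);

    // Non-throwing: yields nullptr on failure so an error code can be returned instead
    auto c = std::unique_ptr<uint32_t[]>(new (std::nothrow) uint32_t[count]);
    if (!c)
    {
        // handle out-of-memory without exceptions
    }
}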
Converting 10:10:10:2 content to 8:8:8:8 content can be done efficiently on the CPU with bit-shifting.
The tricky bit is dealing with up-scaling the 2-bit alpha to 8 bits. For example, you want an alpha of 0b11 to map to 255, not 192.
Here's a replacement for the loop above
// Assumes our staging texture is DXGI_FORMAT_R10G10B10A2_UNORM
for (UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
    auto sptr = reinterpret_cast<uint32_t*>(src);
    for (UINT x = 0; x < StagingTextureDesc.Width; ++x)
    {
        uint32_t t = *(sptr++);

        uint32_t r = (t & 0x000003ff) >> 2;
        uint32_t g = (t & 0x000ffc00) >> 12;
        uint32_t b = (t & 0x3ff00000) >> 22;

        // Upscale alpha
        // 11xxxxxx -> 11111111 (255)
        // 10xxxxxx -> 10101010 (170)
        // 01xxxxxx -> 01010101 (85)
        // 00xxxxxx -> 00000000 (0)
        t &= 0xc0000000;
        uint32_t a = (t >> 24) | (t >> 26) | (t >> 28) | (t >> 30);

        // Convert to DXGI_FORMAT_R8G8B8A8_UNORM
        *(dest++) = r | (g << 8) | (b << 16) | (a << 24);
    }
    src += mapInfo.RowPitch;
}
Of course we can combine the shift operations, since the previous loop shifts each channel down and then back up. We do need to update the masks to remove the bits that would normally be shifted off by the full shifts. This replaces the inner body of the loop above:
// Convert from 10:10:10:2 to 8:8:8:8
uint32_t t = *(sptr++);

uint32_t r = (t & 0x000003fc) >> 2;
uint32_t g = (t & 0x000ff000) >> 4;
uint32_t b = (t & 0x3fc00000) >> 6;

t &= 0xc0000000;
uint32_t a = t | (t >> 2) | (t >> 4) | (t >> 6);

*(dest++) = r | g | b | a;
Any time you reduce the bit depth you will introduce error. Techniques like ordered dithering and error-diffusion dithering are commonly used in pixel conversions of this nature. They introduce a bit of noise into the image to reduce the visual impact of the lost low bits.
For examples of conversions for all DXGI_FORMAT types, see DirectXTex which makes use of DirectXMath for all the various packed vector types. DirectXTex also implements both 4x4 ordered dithering and Floyd-Steinberg error-diffusion dithering when reducing bit-depth.
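For illustration only (this is not DirectXTex's implementation), a 4x4 ordered dither for the 10-bit to 8-bit reduction could look roughly like this: a Bayer threshold matrix adds a position-dependent bias covering the range of the two discarded bits before truncating.
#include <cstdint>

// Classic 4x4 Bayer threshold matrix, values 0..15.
static const uint32_t kBayer4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

// Reduce one 10-bit channel value (0..1023) to 8 bits with ordered dithering.
inline uint32_t ditherTo8(uint32_t value10, uint32_t x, uint32_t y)
{
    // Scale the 0..15 threshold into the 0..3 range of the two bits we drop.
    uint32_t biased = value10 + (kBayer4x4[y & 3][x & 3] >> 2);
    if (biased > 1023) biased = 1023;  // keep the result from spilling into the next channel
    return biased >> 2;
}
Inside the conversion loop, r, g, and b would then come from calls like ditherTo8(t & 0x3ff, x, y) for red and ditherTo8((t >> 10) & 0x3ff, x, y) for green, instead of the plain shifts.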

Convert S3TC / DXTn data to QImage

I have joined a project to simplify legacy graphics code, and would be grateful for advice on this data conversion problem.
The input is compressed textures in DXT1, DXT3, DXT5 formats. The data is in main memory, not graphics card memory. The input does not have the standard DDS_HEADER, only the compressed pixel data. The desired output is QImages.
Using existing metadata, we can construct a DDS_HEADER, write the texture to a temporary file, then load the QImage from that file. However we want to avoid this solution and work with the original data directly as there are many, many instances of it.
My research has uncovered no Qt functions to perform this conversion directly. So far, the most promising sounding approach is to use our existing OpenGL context to draw the texture to a QOpenGLFrameBufferObject. This class has a toImage() member function. However, I don't understand how to construct a usable texture object out of the raw data and draw that to the frame buffer.
Edit: A clarification, based on Scheff's excellent answer. I am aware that the textures can be manually decompressed and a QImage loaded from the result. I would prefer to avoid this step and use library functions if possible, for greatest simplicity. QOpenGLTexture has a member function setCompressedData that might be used.
Thanks for any suggestions.
Reading this question, I became curious and learnt about S3 Texture Compression. Funnily enough, although I had heard about compressed textures in the past, I always assumed it would be something complicated like the LZW algorithm or JPEG compression, and never dug deeper. But today I realized I was totally wrong.
S3 Texture Compression is actually much simpler, though it can achieve quite impressive compression ratios.
Nice introductions can easily be found via Google. The question already mentions MSDN. Additionally, I found some other sites which gave me a quite good introduction to this topic in very little time:
khronos.org: S3 Texture Compression (which I consider as authoritative source)
FSDeveloper.com: DXT compression explained
Joost's Dev Blog: Texture formats for 2D games, part 3: DXT and PVRTC
MSDN: Programming Guide for DDS
Brandon Jones' webgl-texture-utils on GitHub
nv_dds on GitHub, a DDS image loader for OpenGL / OpenGL ES 2.
Concerning the GitHub projects, it seems that somebody has already done the work. I scanned the code a little by eye but, in the end, I'm not sure whether they support all possible features. However, I "borrowed" the test images from Brandon Jones' site, so it's only fair to mention it.
So, this is my actual answer: an alternative approach could be to decode the texture to a QImage entirely on the CPU side.
As a proof of concept, I leave here the result of the code fiddling I did this morning – my attempt to transform the linked descriptions into working C++ code – DXT1-QImage.cc:
#include <cstdint>
#include <fstream>
#include <QtWidgets>
#ifndef _WIN32
typedef quint32 DWORD;
#endif // _WIN32
/* borrowed from:
* https://msdn.microsoft.com/en-us/library/windows/desktop/bb943984(v=vs.85).aspx
*/
struct DDS_PIXELFORMAT {
DWORD dwSize;
DWORD dwFlags;
DWORD dwFourCC;
DWORD dwRGBBitCount;
DWORD dwRBitMask;
DWORD dwGBitMask;
DWORD dwBBitMask;
DWORD dwABitMask;
};
/* borrowed from:
* https://msdn.microsoft.com/en-us/library/windows/desktop/bb943982(v=vs.85).aspx
*/
struct DDS_HEADER {
DWORD dwSize;
DWORD dwFlags;
DWORD dwHeight;
DWORD dwWidth;
DWORD dwPitchOrLinearSize;
DWORD dwDepth;
DWORD dwMipMapCount;
DWORD dwReserved1[11];
DDS_PIXELFORMAT ddspf;
DWORD dwCaps;
DWORD dwCaps2;
DWORD dwCaps3;
DWORD dwCaps4;
DWORD dwReserved2;
};
inline quint32 stretch(std::uint16_t color)
{
return 0xff000000u
| (quint32)(color & 0x001f) << 3 // >> 0 << 3 << 0
| (quint32)(color & 0x07e0) << 5 // >> 5 << 2 << 8
| (quint32)(color & 0xf800) << 8;// >> 11 << 3 << 16
}
void makeLUT(
quint32 lut[4], std::uint16_t color0, std::uint16_t color1)
{
const quint32 argb0 = stretch(color0);
const quint32 argb1 = stretch(color1);
lut[0] = argb0;
lut[1] = argb1;
if (color0 > color1) {
lut[2] = 0xff000000u
| ((((argb0 & 0xff0000) >> 15) + ((argb1 & 0xff0000) >> 16)) / 3) << 16
| ((((argb0 & 0x00ff00) >> 7) + ((argb1 & 0x00ff00) >> 8)) / 3) << 8
| ((((argb0 & 0x0000ff) << 1) + ((argb1 & 0x0000ff) >> 0)) / 3) << 0;
lut[3] = 0xff000000u
| ((((argb1 & 0xff0000) >> 15) + ((argb0 & 0xff0000) >> 16)) / 3) << 16
| ((((argb1 & 0x00ff00) >> 7) + ((argb0 & 0x00ff00) >> 8)) / 3) << 8
| ((((argb1 & 0x0000ff) << 1) + ((argb0 & 0x0000ff) >> 0)) / 3) << 0;
} else {
lut[2] = 0xff000000u
| ((((argb0 & 0xff0000) >> 16) + ((argb1 & 0xff0000) >> 16)) / 2) << 16
| ((((argb0 & 0x00ff00) >> 8) + ((argb1 & 0x00ff00) >> 8)) / 2) << 8
| ((((argb0 & 0x0000ff) >> 0) + ((argb1 & 0x0000ff) >> 0)) / 2) << 0;
lut[3] = 0xff000000u;
}
}
const std::uint8_t* uncompress(
const std::uint8_t *data, QImage &qImg, int x, int y)
{
// get color 0 and color 1
std::uint16_t color0 = data[0] | data[1] << 8;
std::uint16_t color1 = data[2] | data[3] << 8;
data += 4;
quint32 lut[4]; makeLUT(lut, color0, color1);
// decode 4 x 4 pixels
for (int i = 0; i < 4; ++i) {
qImg.setPixel(x + 0, y + i, lut[data[i] >> 0 & 3]);
qImg.setPixel(x + 1, y + i, lut[data[i] >> 2 & 3]);
qImg.setPixel(x + 2, y + i, lut[data[i] >> 4 & 3]);
qImg.setPixel(x + 3, y + i, lut[data[i] >> 6 & 3]);
}
data += 4;
// done
return data;
}
QImage loadDXT1(const char *file)
{
std::ifstream fIn(file, std::ios::in | std::ios::binary);
// read magic code
enum { sizeMagic = 4 }; char magic[sizeMagic];
if (!fIn.read(magic, sizeMagic)) {
return QImage(); // ERROR: read failed
}
if (strncmp(magic, "DDS ", sizeMagic) != 0) {
return QImage(); // ERROR: wrong format (wrong magic code)
}
// read header
DDS_HEADER header;
if (!fIn.read((char*)&header, sizeof header)) {
return QImage(); // ERROR: read failed
}
qDebug() << "header size:" << sizeof header;
// get raw data (size computation unclear)
const unsigned w = (header.dwWidth + 3) / 4;
const unsigned h = (header.dwHeight + 3) / 4;
std::vector<std::uint8_t> data(w * h * 8);
qDebug() << "data size:" << data.size();
if (!fIn.read((char*)data.data(), data.size())) {
return QImage(); // ERROR: read failed
}
// decode raw data
QImage qImg(header.dwWidth, header.dwHeight, QImage::Format_ARGB32);
const std::uint8_t *pData = data.data();
for (int y = 0; y < (int)header.dwHeight; y += 4) {
for (int x = 0; x < (int)header.dwWidth; x += 4) {
pData = uncompress(pData, qImg, x, y);
}
}
qDebug() << "processed image size:" << fIn.tellg();
// done
return qImg;
}
int main(int argc, char **argv)
{
qDebug() << "Qt Version:" << QT_VERSION_STR;
QApplication app(argc, argv);
// build QImage
QImage qImg = loadDXT1("test-dxt1.dds");
// setup GUI
QMainWindow qWin;
QLabel qLblImg;
qLblImg.setPixmap(QPixmap::fromImage(qImg));
qWin.setCentralWidget(&qLblImg);
qWin.show();
// exec. application
return app.exec();
}
I did the development and debugging on VS2013. To check whether it is portable to Linux, the best I could do was to compile and test on Cygwin as well.
For this, I wrote a QMake file DXT1-QImage.pro:
SOURCES = DXT1-QImage.cc
QT += widgets
To compile and run this in bash:
$ qmake-qt5 DXT1-QImage.pro
$ make
g++ -c -fno-keep-inline-dllexport -D_GNU_SOURCE -pipe -O2 -Wall -W -D_REENTRANT -DQT_NO_DEBUG -DQT_WIDGETS_LIB -DQT_GUI_LIB -DQT_CORE_LIB -I. -isystem /usr/include/qt5 -isystem /usr/include/qt5/QtWidgets -isystem /usr/include/qt5/QtGui -isystem /usr/include/qt5/QtCore -I. -I/usr/lib/qt5/mkspecs/cygwin-g++ -o DXT1-QImage.o DXT1-QImage.cc
g++ -o DXT1-QImage.exe DXT1-QImage.o -lQt5Widgets -lQt5Gui -lQt5Core -lGL -lpthread
$ ./DXT1-QImage
Qt Version: 5.9.2
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-ds32737'
header size: 124
data size: 131072
processed image size: 131200
QXcbShmImage: shmget() failed (88: Function not implemented) for size 1048576 (512x512)
For the test, I used the sample file test-dxt1.dds.
And this is what came out:
For comparison, the original image:
Notes:
I implemented a file loader although the questioner explicitly mentioned that he wants to convert raw image data which is already in memory. I had to do this as I didn't see any other way to get (valid) DXT1 raw data into memory on my side (to verify afterwards whether it works or not).
My debug output shows that my loader reads 131200 bytes (i.e. 4 bytes magic code, 124 bytes header, and 131072 bytes compressed image data).
In contrast to this, the file test-dxt1.dds contains 174904 bytes. So there is additional data in the file, but I do not (yet) know what it is good for.
After I got the feedback that my first answer didn't match the questioner's expectations, I modified my sources to draw the DXT1 raw data into an OpenGL texture.
So, this answer addresses specifically this part of the question:
However, I don't understand how to construct a usable texture object out of the raw data and draw that to the frame buffer.
The modifications are strongly "inspired" by the Qt docs Cube OpenGL ES 2.0 example.
The essential part is how the QOpenGLTexture is constructed out of the DXT1 raw data:
_pQGLTex = new QOpenGLTexture(QOpenGLTexture::Target2D);
_pQGLTex->setFormat(QOpenGLTexture::RGB_DXT1);
_pQGLTex->setSize(_img.w, _img.h);
_pQGLTex->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8);
_pQGLTex->setCompressedData((int)_img.data.size(), _img.data.data());
_pQGLTex->setMinificationFilter(QOpenGLTexture::Nearest);
_pQGLTex->setMagnificationFilter(QOpenGLTexture::Linear);
_pQGLTex->setWrapMode(QOpenGLTexture::ClampToEdge);
And, this is the complete sample code DXT1-QTexture-QImage.cc:
#include <cstdint>
#include <fstream>
#include <QtWidgets>
#include <QOpenGLFunctions_4_0_Core>
#ifndef _WIN32
typedef quint32 DWORD;
#endif // _WIN32
/* borrowed from:
* https://msdn.microsoft.com/en-us/library/windows/desktop/bb943984(v=vs.85).aspx
*/
struct DDS_PIXELFORMAT {
DWORD dwSize;
DWORD dwFlags;
DWORD dwFourCC;
DWORD dwRGBBitCount;
DWORD dwRBitMask;
DWORD dwGBitMask;
DWORD dwBBitMask;
DWORD dwABitMask;
};
/* borrowed from:
* https://msdn.microsoft.com/en-us/library/windows/desktop/bb943982(v=vs.85).aspx
*/
struct DDS_HEADER {
DWORD dwSize;
DWORD dwFlags;
DWORD dwHeight;
DWORD dwWidth;
DWORD dwPitchOrLinearSize;
DWORD dwDepth;
DWORD dwMipMapCount;
DWORD dwReserved1[11];
DDS_PIXELFORMAT ddspf;
DWORD dwCaps;
DWORD dwCaps2;
DWORD dwCaps3;
DWORD dwCaps4;
DWORD dwReserved2;
};
struct Image {
int w, h;
std::vector<std::uint8_t> data;
explicit Image(int w = 0, int h = 0):
w(w), h(h), data(((w + 3) / 4) * ((h + 3) / 4) * 8)
{ }
~Image() = default;
Image(const Image&) = delete;
Image& operator=(const Image&) = delete;
Image(Image &&img): w(img.w), h(img.h), data(move(img.data)) { }
};
Image loadDXT1(const char *file)
{
std::ifstream fIn(file, std::ios::in | std::ios::binary);
// read magic code
enum { sizeMagic = 4 }; char magic[sizeMagic];
if (!fIn.read(magic, sizeMagic)) {
return Image(); // ERROR: read failed
}
if (strncmp(magic, "DDS ", sizeMagic) != 0) {
return Image(); // ERROR: wrong format (wrong magic code)
}
// read header
DDS_HEADER header;
if (!fIn.read((char*)&header, sizeof header)) {
return Image(); // ERROR: read failed
}
qDebug() << "header size:" << sizeof header;
// get raw data (size computation unclear)
Image img(header.dwWidth, header.dwHeight);
qDebug() << "data size:" << img.data.size();
if (!fIn.read((char*)img.data.data(), img.data.size())) {
return Image(); // ERROR: read failed
}
qDebug() << "processed image size:" << fIn.tellg();
// done
return img;
}
const char *vertexShader =
"#ifdef GL_ES\n"
"// Set default precision to medium\n"
"precision mediump int;\n"
"precision mediump float;\n"
"#endif\n"
"\n"
"uniform mat4 mvp_matrix;\n"
"\n"
"attribute vec4 a_position;\n"
"attribute vec2 a_texcoord;\n"
"\n"
"varying vec2 v_texcoord;\n"
"\n"
"void main()\n"
"{\n"
" // Calculate vertex position in screen space\n"
" gl_Position = mvp_matrix * a_position;\n"
"\n"
" // Pass texture coordinate to fragment shader\n"
" // Value will be automatically interpolated to fragments inside polygon faces\n"
" v_texcoord = a_texcoord;\n"
"}\n";
const char *fragmentShader =
"#ifdef GL_ES\n"
"// Set default precision to medium\n"
"precision mediump int;\n"
"precision mediump float;\n"
"#endif\n"
"\n"
"uniform sampler2D texture;\n"
"\n"
"varying vec2 v_texcoord;\n"
"\n"
"void main()\n"
"{\n"
" // Set fragment color from texture\n"
"#if 0 // test check tex coords\n"
" gl_FragColor = vec4(1, v_texcoord.x, v_texcoord.y, 1);\n"
"#else // (not) 0;\n"
" gl_FragColor = texture2D(texture, v_texcoord);\n"
"#endif // 0\n"
"}\n";
struct Vertex {
QVector3D coord;
QVector2D texCoord;
Vertex(qreal x, qreal y, qreal z, qreal s, qreal t):
coord(x, y, z), texCoord(s, t)
{ }
};
class OpenGLWidget: public QOpenGLWidget, public QOpenGLFunctions_4_0_Core {
private:
const Image &_img;
QOpenGLShaderProgram _qGLSProg;
QOpenGLBuffer _qGLBufArray;
QOpenGLBuffer _qGLBufIndex;
QOpenGLTexture *_pQGLTex;
public:
explicit OpenGLWidget(const Image &img):
QOpenGLWidget(nullptr),
_img(img),
_qGLBufArray(QOpenGLBuffer::VertexBuffer),
_qGLBufIndex(QOpenGLBuffer::IndexBuffer),
_pQGLTex(nullptr)
{ }
virtual ~OpenGLWidget()
{
makeCurrent();
delete _pQGLTex;
_qGLBufArray.destroy();
_qGLBufIndex.destroy();
doneCurrent();
}
// disabled: (to prevent accidental usage)
OpenGLWidget(const OpenGLWidget&) = delete;
OpenGLWidget& operator=(const OpenGLWidget&) = delete;
protected:
virtual void initializeGL() override
{
initializeOpenGLFunctions();
glClearColor(0, 0, 0, 1);
initShaders();
initGeometry();
initTexture();
}
virtual void paintGL() override
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
_pQGLTex->bind();
QMatrix4x4 mat; mat.setToIdentity();
_qGLSProg.setUniformValue("mvp_matrix", mat);
_qGLSProg.setUniformValue("texture", 0);
// draw geometry
_qGLBufArray.bind();
_qGLBufIndex.bind();
quintptr offset = 0;
int coordLocation = _qGLSProg.attributeLocation("a_position");
_qGLSProg.enableAttributeArray(coordLocation);
_qGLSProg.setAttributeBuffer(coordLocation, GL_FLOAT, offset, 3, sizeof(Vertex));
offset += sizeof(QVector3D);
int texCoordLocation = _qGLSProg.attributeLocation("a_texcoord");
_qGLSProg.enableAttributeArray(texCoordLocation);
_qGLSProg.setAttributeBuffer(texCoordLocation, GL_FLOAT, offset, 2, sizeof(Vertex));
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, 0);
//glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
}
private:
void initShaders()
{
if (!_qGLSProg.addShaderFromSourceCode(QOpenGLShader::Vertex,
QString::fromLatin1(vertexShader))) close();
if (!_qGLSProg.addShaderFromSourceCode(QOpenGLShader::Fragment,
QString::fromLatin1(fragmentShader))) close();
if (!_qGLSProg.link()) close();
if (!_qGLSProg.bind()) close();
}
void initGeometry()
{
Vertex vertices[] = {
// x y z s t
{ -1.0f, -1.0f, 0.0f, 0.0f, 0.0f },
{ +1.0f, -1.0f, 0.0f, 1.0f, 0.0f },
{ +1.0f, +1.0f, 0.0f, 1.0f, 1.0f },
{ -1.0f, +1.0f, 0.0f, 0.0f, 1.0f }
};
enum { nVtcs = sizeof vertices / sizeof *vertices };
// OpenGL ES doesn't have QUAD. A TRIANGLE_STRIP does as well.
GLushort indices[] = { 3, 0, 2, 1 };
//GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
enum { nIdcs = sizeof indices / sizeof *indices };
_qGLBufArray.create();
_qGLBufArray.bind();
_qGLBufArray.allocate(vertices, nVtcs * sizeof (Vertex));
_qGLBufIndex.create();
_qGLBufIndex.bind();
_qGLBufIndex.allocate(indices, nIdcs * sizeof (GLushort));
}
void initTexture()
{
#if 0 // test whether texturing works at all
//_pQGLTex = new QOpenGLTexture(QImage("test.png").mirrored());
_pQGLTex = new QOpenGLTexture(QImage("test-dxt1.dds").mirrored());
#else // (not) 0
_pQGLTex = new QOpenGLTexture(QOpenGLTexture::Target2D);
_pQGLTex->setFormat(QOpenGLTexture::RGB_DXT1);
_pQGLTex->setSize(_img.w, _img.h);
_pQGLTex->allocateStorage(QOpenGLTexture::RGBA, QOpenGLTexture::UInt8);
_pQGLTex->setCompressedData((int)_img.data.size(), _img.data.data());
#endif // 0
_pQGLTex->setMinificationFilter(QOpenGLTexture::Nearest);
_pQGLTex->setMagnificationFilter(QOpenGLTexture::Nearest);
_pQGLTex->setWrapMode(QOpenGLTexture::ClampToEdge);
}
};
int main(int argc, char **argv)
{
qDebug() << "Qt Version:" << QT_VERSION_STR;
QApplication app(argc, argv);
// load a DDS image to get DTX1 raw data
Image img = loadDXT1("test-dxt1.dds");
// setup GUI
QMainWindow qWin;
OpenGLWidget qGLView(img);
/* I apply brute-force to get main window to sufficient size
* -> not appropriate for a productive solution...
*/
qGLView.setMinimumSize(img.w, img.h);
qWin.setCentralWidget(&qGLView);
qWin.show();
// exec. application
return app.exec();
}
For the test, I used (again) the sample file test-dxt1.dds.
And this is how it looks (sample compiled with VS2013 and Qt 5.9.2):
Notes:
The texture is displayed upside-down. Please consider that the original sample, as well as my (excluded) code for texture loading from a QImage, applies QImage::mirrored(). It seems that QImage stores the data from top to bottom whereas OpenGL textures expect the opposite – from bottom to top. I guess the easiest fix would be to apply it after the texture is converted back to a QImage.
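(Once the texture has been read back into a QImage, a call along the lines of qImg = qImg.mirrored(); should do, since QImage::mirrored() returns a vertically flipped copy; qImg is just a placeholder name here.)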
My original intention was to also implement the part that reads the texture back into a QImage (as described/sketched in the question). In general, I have already done something like this in OpenGL (but without Qt); I recently posted another answer, OpenGL offscreen render, about this. I have to admit that I had to cancel this plan due to a "time-out" issue, caused by some problems which took me quite a while to fix. I will share these experiences in the following, as I think they could be helpful for others.
To find sample code for the initialization of a QOpenGLTexture with DXT1 data, I did a long Google search – without success. Hence, I eye-scanned the Qt docs of QOpenGLTexture for methods which looked promising/necessary to get it working. (I have to admit that I had already done OpenGL texturing successfully, but in pure OpenGL.) Finally, I got to the actual set of necessary functions. It compiled and started, but all I got was a black window. (Every time I start something new in OpenGL, it first ends up as a black or blue window, depending on which clear color I used...) So, I had a look into qopengltexture.cpp on woboq.org (specifically into the implementation of QOpenGLTexture::QOpenGLTexture(QImage&, ...)). This didn't help much – they do it very similarly to what I tried.
The most essential problem I could only fix after discussing this program with a colleague who contributed the final hint: I had tried to get this running using QOpenGLFunctions. The last steps (toward the final fix) were trying this out with
_pQGLTex = new QOpenGLTexture(QImage("test.png").mirrored());
(worked) and
_pQGLTex = new QOpenGLTexture(QImage("test-dxt1.dds").mirrored());
(did not work).
This brought us to the idea that QOpenGLFunctions (which is claimed to be compatible with OpenGL ES 2.0) simply does not seem to enable S3 texture loading. Hence, we replaced QOpenGLFunctions with QOpenGLFunctions_4_0_Core and, suddenly, it worked.
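As a quick runtime sanity check (a sketch of mine, not part of the original sources), the current context can also be asked whether it exposes the S3TC extension before uploading DXT1 data:
#include <QOpenGLContext>

// Must be called while an OpenGL context is current, e.g. from initializeGL().
bool contextSupportsS3TC()
{
    QOpenGLContext *ctx = QOpenGLContext::currentContext();
    return ctx && ctx->hasExtension(QByteArrayLiteral("GL_EXT_texture_compression_s3tc"));
}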
I did not override the QOpenGLWidget::resizeGL() method, as I use an identity matrix for the model-view-projection transformation of the OpenGL rendering. This is intended to make model space and clip space identical. Instead, I built a rectangle (-1, -1, 0) - (+1, +1, 0) which should exactly fill the (visible part of the) clip space x-y plane (and it does).
This can be checked visually by enabling the left-in debug code in the shader
gl_FragColor = vec4(1, v_texcoord.x, v_texcoord.y, 1);
which uses the texture coordinates themselves as the green and blue color components. This produces a nicely colored rectangle with red in the lower-left corner, magenta (red and blue) in the upper-left, yellow (red and green) in the lower-right, and white (red, green, and blue) in the upper-right corner. It shows that the rectangle fits perfectly.
As I forced the minimum size of the OpenGLWidget to the exact size of the texture image, the texture-to-pixel mapping should be 1:1. I checked what happens when magnification is set to Nearest – there was no visual difference.
I have to admit that the DXT1 data rendered as a texture looks much better than the decompression I showed in my other answer. Considering that these are the exact same data (read by my nearly identical loader), this makes me think my own uncompress algorithm does not yet account for something (in other words: it still seems to be buggy). Hmm... (It seems it needs additional fixing.)

How to compare a memory bits in C++?

I need help with a memory bit comparison function.
I bought an LED matrix here with 4 x HT1632C chips and I'm using it with my Arduino Mega2560.
There's no code available for this chipset (it's not the same as the HT1632) and I'm writing my own. I have a plot function that takes x,y coordinates and a color and turns that pixel on. This alone works perfectly.
But I need more performance from my display, so I tried to keep a shadowRam variable that is a "copy" of the device memory. Before I plot anything on the display, it checks shadowRam to see whether it's really necessary to change that pixel. When I enabled this (getShadowRam) in the plot function, my display showed some, just SOME (like 3 or 4 on the entire display), ghost pixels (pixels that are not supposed to be turned on).
If I just comment out the prev_color ifs in my plot function, it works perfectly.
Also, I'm clearing my shadowRam array by setting the whole matrix to zero.
Variables:
#define BLACK 0
#define GREEN 1
#define RED 2
#define ORANGE 3
#define CHIP_MAX 8
byte shadowRam[63][CHIP_MAX-1] = {0};
getShadowRam function:
byte HT1632C::getShadowRam(byte x, byte y) {
  byte addr, bitval, nChip;

  if (x >= 32) {
    nChip = 3 + x/16 + (y>7?2:0);
  } else {
    nChip = 1 + x/16 + (y>7?2:0);
  }

  bitval = 8 >> (y&3);
  x = x % 16;
  y = y % 8;
  addr = (x<<1) + (y>>2);

  if ((shadowRam[addr][nChip-1] & bitval) && (shadowRam[addr+32][nChip-1] & bitval)) {
    return ORANGE;
  } else if (shadowRam[addr][nChip-1] & bitval) {
    return GREEN;
  } else if (shadowRam[addr+32][nChip-1] & bitval) {
    return RED;
  } else {
    return BLACK;
  }
}
Plot function:
void HT1632C::plot (int x, int y, int color)
{
  if (x<0 || x>X_MAX || y<0 || y>Y_MAX)
    return;
  if (color != BLACK && color != GREEN && color != RED && color != ORANGE)
    return;

  char addr, bitval;
  byte nChip;
  byte prev_color = HT1632C::getShadowRam(x,y);

  bitval = 8>>(y&3);
  if (x>=32) {
    nChip = 3 + x/16 + (y>7?2:0);
  } else {
    nChip = 1 + x/16 + (y>7?2:0);
  }
  x = x % 16;
  y = y % 8;
  addr = (x<<1) + (y>>2);

  switch(color) {
    case BLACK:
      if (prev_color != BLACK) { // compare with memory to only set if pixel is other color
        // clear the bit in both planes;
        shadowRam[addr][nChip-1] &= ~bitval;
        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
        shadowRam[addr+32][nChip-1] &= ~bitval;
        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
      }
      break;
    case GREEN:
      if (prev_color != GREEN) { // compare with memory to only set if pixel is other color
        // set the bit in the green plane and clear the bit in the red plane;
        shadowRam[addr][nChip-1] |= bitval;
        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
        shadowRam[addr+32][nChip-1] &= ~bitval;
        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
      }
      break;
    case RED:
      if (prev_color != RED) { // compare with memory to only set if pixel is other color
        // clear the bit in green plane and set the bit in the red plane;
        shadowRam[addr][nChip-1] &= ~bitval;
        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
        shadowRam[addr+32][nChip-1] |= bitval;
        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
      }
      break;
    case ORANGE:
      if (prev_color != ORANGE) { // compare with memory to only set if pixel is other color
        // set the bit in both the green and red planes;
        shadowRam[addr][nChip-1] |= bitval;
        HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
        shadowRam[addr+32][nChip-1] |= bitval;
        HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
      }
      break;
  }
}
If it helps:
The datasheet of the board I'm using. Page 7 has the memory mapping I'm using.
Also, I have a video of the display working.
This isn't a real answer, but I think it might be a step towards figuring this out. Since there is so much code duplication and confusing conditional code, you should start with a refactor. It will then be much easier to understand the algorithm. I've taken a stab at it, though no promises that it will be bug free.
Get rid of getShadowRam, and modify plot to look like this:
void HT1632C::plot (int x, int y, byte color)
{
  if (x < 0 || x > X_MAX || y < 0 || y > Y_MAX)
    return;
  if (color != BLACK && color != GREEN && color != RED && color != ORANGE)
    return;

  // using local struct to allow local function definitions
  struct shadowRamAccessor {
    shadowRamAccessor(byte x, byte y) {
      nChip = (x >= 32 ? 3 : 1)
            + x / 16
            + (y > 7 ? 2 : 0);
      bitval = 8 >> (y & 3);
      addr = ((x % 16) << 1) + ((y % 8) >> 2);
      highAddr = addr + 32;
    }

    byte& getShadowRam(byte addr) {
      return shadowRam[addr][nChip-1];
    }

    byte getPreviousColor() {
      byte greenComponent = getShadowRam(addr) & bitval ? GREEN : BLACK;
      byte redComponent = getShadowRam(highAddr) & bitval ? RED : BLACK;
      return greenComponent | redComponent;
    }

    void setValue(byte newColor) {
      byte prev_color = getPreviousColor();
      if (newColor != prev_color)
        setValue(newColor & GREEN, newColor & RED);
    }

    void setValue(bool greenBit, bool redBit)
    {
      HT1632C::sendData(nChip, addr,
        greenBit
          ? getShadowRam(addr) |= bitval
          : getShadowRam(addr) &= ~bitval  // ~bitval (not !bitval), so only this pixel's bit is cleared
      );
      HT1632C::sendData(nChip, highAddr,
        redBit
          ? getShadowRam(highAddr) |= bitval
          : getShadowRam(highAddr) &= ~bitval
      );
    }

    byte nChip, bitval, addr, highAddr;
  };

  shadowRamAccessor(x, y).setValue(color);
}
Uhm.. Probably I'm missing something here, but why don't you change that big switch to:
if (color != prev_color)
{
    shadowRam[addr][nChip-1] |= bitval;
    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);
    shadowRam[addr+32][nChip-1] &= ~bitval;
    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
}
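Note that, as written, that body only reproduces the GREEN case. For the idea to cover all four colors, the green and red planes have to be set or cleared from the color's bits; a sketch of what that could look like, relying on GREEN == 1 and RED == 2 from the defines above:
if (color != prev_color)
{
    // GREEN == 1 and RED == 2, so the color value itself says which plane(s) to light.
    if (color & GREEN) shadowRam[addr][nChip-1] |= bitval;
    else               shadowRam[addr][nChip-1] &= ~bitval;
    HT1632C::sendData(nChip, addr, shadowRam[addr][nChip-1]);

    if (color & RED) shadowRam[addr+32][nChip-1] |= bitval;
    else             shadowRam[addr+32][nChip-1] &= ~bitval;
    HT1632C::sendData(nChip, addr+32, shadowRam[addr+32][nChip-1]);
}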

How do I display/draw a .ply object in OpenGL?

I'm trying to make OpenGL draw the figure that I'm loading with OPENFILENAME. What I've got right now: I can display the comments, the vertices, how many faces there are, etc., but I cannot draw the figure and I'm not sure how to do it. I can draw other predetermined figures, but not the ones I'm trying to open.
This is where I'm initializing everything:
case WM_CREATE:
    hDC = GetDC(hWnd);
    hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC);
    g_hwndDlg = CreateDialog(hInst, MAKEINTRESOURCE(IDD_DIALOG1), hWnd, DialogProc);
    Figure = new DrawFigure();
    initGL();
    break;
This is where I find out what elements the file I'm opening contains:
/* go through each kind of element that we learned is in the file */
/* and read them */
for (i = 0; i < nelems; i++) {
    /* get the description of the first element */
    elem_name = elist[i];
    plist = ply_get_element_description (ply, elem_name, &num_elems, &nprops);
    int el = sprintf(szFile, "element %s %d\n", elem_name, num_elems);

    /* print the name of the element, for debugging */
    TextOut(hDC, 150, 0+i*20, szFile, el);

    /* if we're on vertex elements, read them in */
    if (equal_strings ("vertex", elem_name)) {
        /* create a vertex list to hold all the vertices */
        vlist = (Vertex **) malloc (sizeof (Vertex *) * num_elems);

        /* set up for getting vertex elements */
        ply_get_property (ply, elem_name, &vert_props[0]);
        ply_get_property (ply, elem_name, &vert_props[1]);
        ply_get_property (ply, elem_name, &vert_props[2]);

        /* grab all the vertex elements */
        for (j = 0; j < num_elems; j++) {
            int move = 10;

            /* grab an element from the file */
            vlist[j] = (Vertex *) malloc (sizeof (Vertex));
            ply_get_element (ply, (void *) vlist[j]);
            int vert = sprintf(szFile, "vertex: %g %g %g", vlist[j]->x, vlist[j]->y, vlist[j]->z);

            /* print out vertex x,y,z for debugging */
            TextOut(hDC, 600, move+j*20, szFile, vert);
            Figure->Parameters(vlist[j]->x, vlist[j]->y, vlist[j]->z);
        }
    }
And this is the class Figure, where I'm supposed to draw everything:
Figure::Figure() {
}

void Figure::Parameters(float x, float y, float z)
{
    this->x1 = x;
    this->y1 = y;
    this->z1 = z;
}

void Figure::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    glBegin(GL_TRIANGLES);
    glNormal3f(x1, y1, z1);
    glVertex3f(x1, y1, z1);
    glEnd();
}
x1,y1,z1 are declared in Figure.h
I tried to explain myself the best I could; if you think it still needs more explanation, please tell me and I will try to explain it in a different way.
Yeah, I forgot to explain the figure I'm trying to draw... well, I don't know which figure it will be, because I'm using OPENFILENAME to open a random figure and draw it. I used triangles because I thought that with triangles I could draw anything. I also tried having Parameters receive the number of vertices I'm dealing with and adding a for loop in Draw, but it didn't work.
You only specify one vertex between your begin/end; you need at least 3 to specify a triangle, and many more if you want a whole bunch of triangles. You need something more along the lines of this:
void Figure::Parameters(float x, float y, float z)
{
    m_vertices.push_back(myVertex(x, y, z));
}

void Figure::Draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    glBegin(GL_TRIANGLES);
    assert(m_vertices.size() % 3 == 0); // since we're drawing triangles
    for (size_t i = 0; i < m_vertices.size(); i++)
    {
        glNormal3f(m_vertices[i].x, m_vertices[i].y, m_vertices[i].z);
        glVertex3f(m_vertices[i].x, m_vertices[i].y, m_vertices[i].z);
    }
    glEnd();
}
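Note that most .ply files store shared vertices plus a separate "face" element whose entries are vertex indices, so the vertices usually cannot be drawn three at a time in file order. A sketch of what indexed drawing could look like, assuming a hypothetical m_indices vector filled while reading the face element (in the same style as Parameters above):
void Figure::DrawIndexed()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);

    glBegin(GL_TRIANGLES);
    // m_indices holds three vertex indices per triangle, read from the
    // "face" element of the .ply file (hypothetical member, see above).
    for (size_t i = 0; i + 2 < m_indices.size(); i += 3)
    {
        for (int k = 0; k < 3; ++k)
        {
            const myVertex& v = m_vertices[m_indices[i + k]];
            glNormal3f(v.x, v.y, v.z); // crude: position reused as normal, as in the code above
            glVertex3f(v.x, v.y, v.z);
        }
    }
    glEnd();
}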

How to get a C method to accept UIImage parameter?

I am trying to do some image processing on a UIImage using some EAGLView code from Apple's GLImageProcessing sample. The sample code is configured to perform processing on a pre-installed image (Image.png). I am trying to modify the code so that it will accept a UIImage (or at least CGImage data) of my choice and process that instead. The problem is, the texture-loader method loadTexture() (below) seems to accept only C structures as parameters, and I have not been able to get it to accept a UIImage* or a CGImage as a parameter. Can someone give me a clue as to how to bridge the gap so that I can pass my UIImage into the C function?
------------ from Texture.h ---------------
#ifndef TEXTURE_H
#define TEXTURE_H
#include "Imaging.h"
void loadTexture(const char *name, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_H */
----------------from Texture.m---------------------
#import <UIKit/UIKit.h>
#import "Texture.h"
static unsigned int nextPOT(unsigned int x)
{
    x = x - 1;
    x = x | (x >> 1);
    x = x | (x >> 2);
    x = x | (x >> 4);
    x = x | (x >> 8);
    x = x | (x >> 16);
    return x + 1;
}
// This is not a fully generalized image loader. It is an example of how to use
// CGImage to directly access decompressed image data. Only the most commonly
// used image formats are supported. It will be necessary to expand this code
// to account for other uses, for example cubemaps or compressed textures.
//
// If the image format is supported, this loader will Gen a OpenGL 2D texture object
// and upload texels from it, padding to POT if needed. For image processing purposes,
// border pixels are also replicated here to ensure proper filtering during e.g. blur.
//
// The caller of this function is responsible for deleting the GL texture object.
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;
// Parse CGImage info
info = CGImageGetBitmapInfo(CGImage); // CGImage may return pixels in RGBA, BGRA, or ARGB order
colormodel = CGColorSpaceGetModel(CGImageGetColorSpace(CGImage));
size_t bpp = CGImageGetBitsPerPixel(CGImage);
if (bpp < 8 || bpp > 32 || (colormodel != kCGColorSpaceModelMonochrome && colormodel != kCGColorSpaceModelRGB))
{
// This loader does not support all possible CGImage types, such as paletted images
CGImageRelease(CGImage);
return;
}
components = bpp>>3;
rowBytes = CGImageGetBytesPerRow(CGImage); // CGImage may pad rows
rowPixels = rowBytes / components;
imgWide = CGImageGetWidth(CGImage);
imgHigh = CGImageGetHeight(CGImage);
img->wide = rowPixels;
img->high = imgHigh;
img->s = (float)imgWide / rowPixels;
img->t = 1.0;
// Choose OpenGL format
switch(bpp)
{
default:
rt_assert(0 && "Unknown CGImage bpp");
case 32:
{
internal = GL_RGBA;
switch(info & kCGBitmapAlphaInfoMask)
{
case kCGImageAlphaPremultipliedFirst:
case kCGImageAlphaFirst:
case kCGImageAlphaNoneSkipFirst:
format = GL_BGRA;
break;
default:
format = GL_RGBA;
}
break;
}
case 24:
internal = format = GL_RGB;
break;
case 16:
internal = format = GL_LUMINANCE_ALPHA;
break;
case 8:
internal = format = GL_LUMINANCE;
break;
}
// Get a pointer to the uncompressed image data.
//
// This allows access to the original (possibly unpremultiplied) data, but any manipulation
// (such as scaling) has to be done manually. Contrast this with drawing the image
// into a CGBitmapContext, which allows scaling, but always forces premultiplication.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
rt_assert(data);
pixels = (GLubyte *)CFDataGetBytePtr(data);
rt_assert(pixels);
// If the CGImage component layout isn't compatible with OpenGL, fix it.
// On the device, CGImage will generally return BGRA or RGBA.
// On the simulator, CGImage may return ARGB, depending on the file format.
if (format == GL_BGRA)
{
uint32_t *p = (uint32_t *)pixels;
int i, num = img->wide * img->high;
if ((info & kCGBitmapByteOrderMask) != kCGBitmapByteOrder32Host)
{
// Convert from ARGB to BGRA
for (i = 0; i < num; i++)
p[i] = (p[i] << 24) | ((p[i] & 0xFF00) << 8) | ((p[i] >> 8) & 0xFF00) | (p[i] >> 24);
}
// All current iPhoneOS devices support BGRA via an extension.
if (!renderer->extension[IMG_texture_format_BGRA8888])
{
format = GL_RGBA;
// Convert from BGRA to RGBA
for (i = 0; i < num; i++)
#if __LITTLE_ENDIAN__
p[i] = ((p[i] >> 16) & 0xFF) | (p[i] & 0xFF00FF00) | ((p[i] & 0xFF) << 16);
#else
p[i] = ((p[i] & 0xFF00) << 16) | (p[i] & 0xFF00FF) | ((p[i] >> 16) & 0xFF00);
#endif
}
}
// Determine if we need to pad this image to a power of two.
// There are multiple ways to deal with NPOT images on renderers that only support POT:
// 1) scale down the image to POT size. Loses quality.
// 2) pad up the image to POT size. Wastes memory.
// 3) slice the image into multiple POT textures. Requires more rendering logic.
//
// We are only dealing with a single image here, and pick 2) for simplicity.
//
// If you prefer 1), you can use CoreGraphics to scale the image into a CGBitmapContext.
POTWide = nextPOT(img->wide);
POTHigh = nextPOT(img->high);
if (!renderer->extension[APPLE_texture_2D_limited_npot] && (img->wide != POTWide || img->high != POTHigh))
{
GLuint dstBytes = POTWide * components;
GLubyte *temp = (GLubyte *)malloc(dstBytes * POTHigh);
for (y = 0; y < img->high; y++)
memcpy(&temp[y*dstBytes], &pixels[y*rowBytes], rowBytes);
img->s *= (float)img->wide/POTWide;
img->t *= (float)img->high/POTHigh;
img->wide = POTWide;
img->high = POTHigh;
pixels = temp;
rowBytes = dstBytes;
}
// For filters that sample texel neighborhoods (like blur), we must replicate
// the edge texels of the original input, to simulate CLAMP_TO_EDGE.
{
GLuint replicatew = MIN(MAX_FILTER_RADIUS, img->wide-imgWide);
GLuint replicateh = MIN(MAX_FILTER_RADIUS, img->high-imgHigh);
GLuint imgRow = imgWide * components;
for (y = 0; y < imgHigh; y++)
for (x = 0; x < replicatew; x++)
memcpy(&pixels[y*rowBytes+imgRow+x*components], &pixels[y*rowBytes+imgRow-components], components);
for (y = imgHigh; y < imgHigh+replicateh; y++)
memcpy(&pixels[y*rowBytes], &pixels[(imgHigh-1)*rowBytes], imgRow+replicatew*components);
}
if (img->wide <= renderer->maxTextureSize && img->high <= renderer->maxTextureSize)
{
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
// Set filtering parameters appropriate for this application (image processing on screen-aligned quads.)
// Depending on your needs, you may prefer linear filtering, or mipmap generation.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, internal, img->wide, img->high, 0, format, GL_UNSIGNED_BYTE, pixels);
}
if (temp) free(temp);
CFRelease(data);
CGImageRelease(CGImage);
img->texID = texID;
}
Side note: the above code is the original, unmodified sample code from Apple and does not generate any errors when compiled. However, when I try to modify the .h and .m to accept a UIImage* parameter (as below), the compiler generates the following error: "Error: expected declaration specifiers or "..." before UIImage"
----------Modified .h Code that generates the Compiler Error:-------------
void loadTexture(const char name, Image *img, RendererInfo *renderer, UIImage* newImage)
You are probably importing this .h into a .c file somewhere. That tells the compiler to use C rather than Objective-C. UIKit.h (and its many children) is written in Objective-C and cannot be compiled by a C compiler.
You can rename all your .c files to .m, but what you probably really want is just to use CGImageRef and import CGImage.h. CoreGraphics is C-based; UIKit is Objective-C. There is no problem, if you want, with Texture.m being Objective-C. Just make sure that Texture.h is pure C. Alternatively (and I do this a lot with C++ code), you can make a Texture+C.h header that provides just the C-safe functions you want to expose. Import Texture.h in Objective-C code, and Texture+C.h in C code. Or name them the other way around if more convenient, with a Texture+ObjC.h.
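For illustration, a C-safe wrapper along those lines might look like this (the name loadTextureFromCGImage is hypothetical; the point is that the header only mentions CoreGraphics and plain C types, while the bridging from UIImage stays inside an Objective-C .m file):
/* Texture+C.h -- pure C interface, safe to include from .c files */
#ifndef TEXTURE_PLUS_C_H
#define TEXTURE_PLUS_C_H

#include <CoreGraphics/CoreGraphics.h>
#include "Imaging.h"

/* Hypothetical variant of loadTexture that takes an already-extracted
 * CGImageRef instead of a bundle image name. */
void loadTextureFromCGImage(CGImageRef image, Image *img, RendererInfo *renderer);

#endif /* TEXTURE_PLUS_C_H */
Objective-C code would then call it as loadTextureFromCGImage(myUIImage.CGImage, &img, &renderer);, where myUIImage is whatever UIImage you want to process.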
It sounds like your file isn't importing the UIKit header.
Why are you passing a new image to loadTexture, instead of using loadTexture's own UIImage loading to open the new image you want?
loadTexture:
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
[Why not have the following fetch your UIImage?]
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;