I am trying to create a video file from an animation I have in OpenGL.
I have been reading about how to do that, and to my understanding there are two options:
1. Save each rendered frame in OpenGL to an image file and then create a video file from those.
2. Get the frame data using glReadPixels() and write it to a video file on the fly.
The second approach is the one I believe would work best for me; however, I cannot find information on how to achieve the second part (writing to a video file).
Can anyone point me to some websites where I can learn how to do that? What libraries are out there that I can use to encode a video from the frames I am rendering in OpenGL?
EDIT
After searching a bit more about this, I believe ffmpeg is the way to go. I found a blog post with code that apparently works on Windows.
I downloaded ffmpeg from the website so that I can execute the command just as in the example. Unfortunately, my application crashes and no video is created. I checked whether the file pointer is valid, and it is not, so I believe the error comes from the call to popen().
I am passing exactly the same arguments as the command-line invocation, but I still get no valid file pointer. Any idea what could be happening?
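For reference, the piping approach I am attempting looks roughly like this (a rough sketch only; the exact ffmpeg flags, the output codec, and the use of _popen on Windows are my own assumptions):

#include <cstdio>
#include <vector>
#include <GL/gl.h>

// Open a pipe to ffmpeg that expects raw RGBA frames on stdin.
FILE *openEncoderPipe(int width, int height)
{
    char cmd[512];
    std::snprintf(cmd, sizeof(cmd),
        "ffmpeg -y -f rawvideo -pix_fmt rgba -s %dx%d -r 30 -i - "
        "-vf vflip -c:v libx264 -pix_fmt yuv420p output.mp4",
        width, height);
    return _popen(cmd, "wb");   // popen(cmd, "w") on POSIX; _popen and "wb" are Windows-specific
}

// Grab the current frame with glReadPixels and push it down the pipe.
void writeFrame(FILE *pipe, int width, int height)
{
    std::vector<unsigned char> buffer(static_cast<size_t>(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
    std::fwrite(buffer.data(), 1, buffer.size(), pipe);
    // when the animation finishes: _pclose(pipe);
}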
The thing is, I don't want to spend much time coding the video encoding since I have other projects to work on.
Since I couldn't use ffmpeg directly from my C++ code, a possible solution is as follows. In Qt 5 you have the function paintGL(), where you render each frame. After drawing, grab the pixels with glReadPixels() and then just save the frame as a PNG image using QImage:
void OpenGLViewer::paintGL()
{
    // Clear screen
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Draw the point array attached to OpenGL
    glDrawArrays(GL_POINTS, 0, _points);
    // Schedule the next repaint so the animation keeps running
    update();
    // Read back the frame that was just rendered
    glReadPixels(0, 0, this->width(), this->height(), GL_RGBA, GL_UNSIGNED_BYTE, _buffer);
    // Save the frame as a numbered PNG
    std::stringstream name;
    name << "Frame" << _frame++ << ".png";
    QString filename(name.str().c_str());
    QImage imagen(_buffer, this->width(), this->height(), QImage::Format_ARGB32);
    imagen.save(filename, "PNG");
}
This will leave a bunch of images in your working directory that you can encode into a video using the following command from the console:
ffmpeg -framerate 30 -start_number 0 -i Frame%d.png -vcodec mpeg4 -vf vflip test.avi
I still have to check why the colors are inverted, but for now this works fine, since the animation is the important thing and not the colors.
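If the colors ever start to matter, my guess (untested) is that the mismatch comes from handing GL_RGBA bytes to QImage::Format_ARGB32, which expects BGRA byte order on little-endian machines. A sketch of a possible fix, using the RGBA-matching format available since Qt 5.2, which would also remove the need for -vf vflip:

// Untested assumption: match the QImage format to glReadPixels' RGBA byte order
// and mirror vertically, since OpenGL returns rows bottom-up.
QImage imagen(_buffer, this->width(), this->height(), QImage::Format_RGBA8888);
imagen.mirrored().save(filename, "PNG");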
You can do it this way after installing libpng:
// (Assumed to live inside something like: bool save_frame_png(const char *filename, int w, int h).)
uint8_t *pixels = new uint8_t[w * h * 3];
glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, (GLvoid *) pixels);
// Flip the buffer in place: glReadPixels returns rows bottom-up.
for (int j = 0; j * 2 < h; ++j) {
    int x = j * w * 3;
    int y = (h - 1 - j) * w * 3;
    for (int i = w * 3; i > 0; --i) {
        uint8_t tmp = pixels[x];
        pixels[x] = pixels[y];
        pixels[y] = tmp;
        ++x;
        ++y;
    }
}
png_structp png = png_create_write_struct(PNG_LIBPNG_VER_STRING, nullptr, nullptr, nullptr);
if (!png)
    return false;
png_infop info = png_create_info_struct(png);
if (!info) {
    png_destroy_write_struct(&png, &info);
    return false;
}
std::string s = "IMAGE/" + std::string(filename); // the IMAGE/ directory must already exist
FILE *fp = fopen(s.c_str(), "wb");
if (!fp) {
    png_destroy_write_struct(&png, &info);
    return false;
}
png_init_io(png, fp);
png_set_IHDR(png, info, w, h, 8, PNG_COLOR_TYPE_RGB, PNG_INTERLACE_NONE,
             PNG_COMPRESSION_TYPE_BASE, PNG_FILTER_TYPE_BASE);
png_colorp palette = (png_colorp) png_malloc(png, PNG_MAX_PALETTE_LENGTH * sizeof(png_color));
if (!palette) {
    fclose(fp);
    png_destroy_write_struct(&png, &info);
    return false;
}
png_set_PLTE(png, info, palette, PNG_MAX_PALETTE_LENGTH);
png_write_info(png, info);
png_set_packing(png);
png_bytepp rows = (png_bytepp) png_malloc(png, h * sizeof(png_bytep));
for (int i = 0; i < h; ++i)
    rows[i] = (png_bytep)(pixels + i * w * 3); // the buffer is already top-down after the flip above
png_write_image(png, rows);
png_write_end(png, info);
png_free(png, rows);    // allocated with png_malloc, so release with png_free rather than delete[]
png_free(png, palette);
png_destroy_write_struct(&png, &info);
fclose(fp);
delete[] pixels;
I have to load a 24-bit BMP file at a certain (x, y) position of a GLUT window using OpenGL. I have found a function that uses the glaux library to do so. The color passed as ignoreColor is ignored during rendering.
void iShowBMP(int x, int y, char filename[], int ignoreColor)
{
AUX_RGBImageRec *TextureImage;
TextureImage = auxDIBImageLoad(filename);
int i,j,k;
int width = TextureImage->sizeX;
int height = TextureImage->sizeY;
int nPixels = width * height;
int *rgPixels = new int[nPixels];
for (i = 0, j=0; i < nPixels; i++, j += 3)
{
int rgb = 0;
for(int k = 2; k >= 0; k--)
{
rgb = ((rgb << 8) | TextureImage->data[j+k]);
}
rgPixels[i] = (rgb == ignoreColor) ? 0 : 255;
rgPixels[i] = ((rgPixels[i] << 24) | rgb);
}
glRasterPos2f(x, y);
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);
delete []rgPixels;
free(TextureImage->data);
free(TextureImage);
}
But the problem is that glaux is now obsolete. If I call this function, the image is rendered and shown for a moment, then an error pops up (without any error message) and the GLUT window disappears. From the value returned to the console, it seems like a runtime error.
Is there any alternative to this function that doesn't use glaux? I have looked at CImg, DevIL, etc., but none of them seems to work like this iShowBMP function. I am doing my project in Code::Blocks.
I have to load the image every frame to keep the implementation consistent with other parts of the program. Also, the BMP file whose name is passed as a parameter to the function has both width and height that are powers of 2.
The last two free() statements were not getting executed for some unknown reason, so memory consumption kept increasing; that's why the program was crashing after a moment. Later I solved it using stb_image.h.
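For anyone looking for the same thing, here is a rough sketch of what the stb_image.h replacement can look like (an illustration of the approach, not the exact code):

// Hypothetical iShowBMP replacement based on stb_image.h.
// stb_image returns rows top-down, while glDrawPixels draws bottom-up, hence the flip-on-load call.
#define STB_IMAGE_IMPLEMENTATION   // put this in exactly one source file
#include "stb_image.h"

void iShowBMP(int x, int y, const char *filename, int ignoreColor)
{
    int width, height, channels;
    stbi_set_flip_vertically_on_load(1);
    unsigned char *data = stbi_load(filename, &width, &height, &channels, 3); // force RGB
    if (!data)
        return;
    int nPixels = width * height;
    int *rgPixels = new int[nPixels];
    for (int i = 0, j = 0; i < nPixels; i++, j += 3)
    {
        // Pack the bytes the same way the glaux version did, then add the alpha byte.
        int rgb = (data[j + 2] << 16) | (data[j + 1] << 8) | data[j];
        int alpha = (rgb == ignoreColor) ? 0 : 255;
        rgPixels[i] = (alpha << 24) | rgb;
    }
    glRasterPos2f(x, y);
    glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgPixels);
    delete[] rgPixels;
    stbi_image_free(data);
}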
I’m developing a plugin for SIMDIS (basically a military Google Earth), written in C++ using VS 2012. It’s a pretty nifty little thing to auto-plot points, and one of its functions is to take a series of screenshots of the viewport and save the images off so they can be used/processed somewhere else. This works fine too… until you resize the viewport one too many times. Resizing is done by clicking the corner of the window and dragging it bigger or smaller, and the program may launch in full-screen or windowed mode; either way it works fine for the first few sets… or as long as the window is not resized.
When it breaks, the program will still march happily along, create the files, and fill them with data at what seems to be an appropriate size for whatever resolution of image I’m trying to generate… but the format becomes invalid. It will still be a *.bmp, but Windows stops being able to understand it. No errors are thrown, though (I think; I’m not catching any GL errors, if that’s even possible).
I can’t get it to happen consistently after a specific number of actions, but it seems to start failing after 3-7 viewport resizes. I don’t know if this is a problem with my screenshot code, an issue with the SIMDIS program or plugin, a GL issue, or what. I’ve tested it on multiple machines.
Has anyone run into this problem before? Is there something specific I should be doing that I’m not? Is this a problem native to the parent program (SIMDIS), or something I can work with/around with GL commands I don’t know about?
Screenshot code follows:
#include "TakeScreenshot.h" //has "#include <gl/GL.h>" etc...
TakeScreenshot::TakeScreenshot()
{
}
std::vector<int> * TakeScreenshot::TakeAScreenshotBMP(const char* filename)
{
//std::cout << "Screenshot! ";
std::vector<int> * returnVec = new std::vector<int>();
int VPort[4] = {0,0,0,0};
int FSize = 0;
int PackStore = 0;
//get GL viewport dimensions, x,y,w,h into vport
glGetIntegerv(GL_VIEWPORT,VPort);
//make a framebuffer, RGB
FSize = VPort[2]*VPort[3]*3;
unsigned char PStore[8294400];// 4k sized buffer
//store settings
glGetIntegerv(GL_PACK_ALIGNMENT, &PackStore);
//unpack to byte order
glPixelStorei(GL_PACK_ALIGNMENT, 1);
//read the gl buffer into our buffer
glReadPixels(VPort[0],VPort[1],VPort[2],VPort[3],GL_RGB,GL_UNSIGNED_BYTE,&PStore);
//Pass back settings
glPixelStorei(GL_PACK_ALIGNMENT, PackStore);
///
//set up file info
///
BITMAPINFOHEADER BMIH; //info header
BMIH.biSize = sizeof(BITMAPINFOHEADER);
BMIH.biSizeImage= VPort[2] * VPort[3] * 3;
BMIH.biWidth = VPort[2];
BMIH.biHeight = VPort[3];
BMIH.biPlanes = 1;
BMIH.biBitCount = 24;
BMIH.biCompression = BI_RGB;
BITMAPFILEHEADER bmfh;//file header
int nBitsOffset = sizeof(BITMAPFILEHEADER) + BMIH.biSize;
LONG lImageSize = BMIH.biSizeImage;
LONG lFileSize = nBitsOffset + lImageSize;
bmfh.bfType = 'B' + ('M'<<8);
bmfh.bfOffBits = nBitsOffset;
bmfh.bfSize = lFileSize;
bmfh.bfReserved1 = bmfh.bfReserved2 = 0;
// swap r and b values because GL has them backwards for BMP format.
unsigned char SwapByte;
for(int loop = 0; loop<FSize; loop+=3)
{
SwapByte = PStore[loop];
PStore[loop] = PStore[loop+2];
PStore[loop +2] = SwapByte;
}
///
// File writing section
///
FILE *pFile;
pFile = fopen(filename, "wb");
//if something borked
if(pFile == NULL)
{
std::cout << "TakeScreenshot::TakeAScreenshotBMP>> Error; was not able to create file (Permisions?)" << std::endl;
returnVec->push_back(-1);
returnVec->push_back(-1);
return returnVec; //exit
}
UINT nWrittenFileHeaderSize = fwrite(&bmfh,1,sizeof(BITMAPFILEHEADER), pFile);
UINT nWrittenInfoHeaderSize = fwrite(&BMIH,1,sizeof(BITMAPINFOHEADER), pFile);
UINT nWrittenDIBDataSize = fwrite(&PStore, 1, lImageSize, pFile);
fclose(pFile);
//some return data for processing later
returnVec->push_back(VPort[2]);
returnVec->push_back(VPort[3]);
return returnVec;
}
TakeScreenshot::~TakeScreenshot(void)
{
}
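As an aside on the GL-error question above: it is possible to poll for errors manually. A minimal sketch (a debugging aid, not part of the existing plugin code) of a check that could go right after the glReadPixels call:

// Hypothetical helper: report any pending GL errors, since failing calls are otherwise silent.
static void CheckGLError(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        std::cout << "GL error 0x" << std::hex << err << std::dec << " after " << where << std::endl;
}
// usage inside TakeAScreenshotBMP(): CheckGLError("glReadPixels");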
Based on the muxing sample that comes with the FFmpeg docs, I have modified it from S16 input format to FLTP (planar stereo), outputting to the WebM format (stereo).
Since the input is now FLTP, I am filling two arrays and then encoding to FLTP again. There are no obvious errors on screen, but the resulting WebM video does not play any audio (just the video content). This is just a proof of concept to understand things; here is the added (crude) function that fills the input FLTP stereo buffer:
static void get_audio_frame_for_planar_stereo(int16_t **samples, int frame_size, int nb_channels)
{
int j, i, v[2];
int16_t *q1 = (int16_t *) samples[0];
int16_t *q2 = (int16_t *) samples[1];
for (j = 0; j < frame_size; j++)
{
v[0] = (int)(sin(t) * 10000);
v[1] = (int)(tan(t) * 10000);
*q1++ = v[0];
*q2++ = v[1];
t += tincr;
tincr += tincr2;
}
}
I am calling it from inside the write_audio_frame() function.
Note also that wherever the code referred to AV_SAMPLE_FMT_S16 as input, I have changed it to AV_SAMPLE_FMT_FLTP.
The whole working source is here:
https://gist.github.com/anonymous/05d1d7662e9feafc45a6
When the output is checked with ffprobe.exe, using this command:
ffprobe -show_packets output.webm >output.txt
I see nothing out of the ordinary; all pts/dts values appear to be in place:
https://gist.github.com/anonymous/3ed0d6308700ab991704
Could someone point out the cause of this misinterpretation?
Thanks for your time...
P.S. I am using the Zeranoe FFmpeg Windows builds (32-bit), built on Jan 9 2014 22:04:35 with gcc 4.8.2 (GCC).
Edit: Based on your guidance elsewhere, I tried the following:
/* set options */
//av_opt_set_int (swr_ctx, "in_channel_count", c->channels, 0);
//av_opt_set_int (swr_ctx, "in_sample_rate", c->sample_rate, 0);
//av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
//av_opt_set_int (swr_ctx, "out_channel_count", c->channels, 0);
//av_opt_set_int (swr_ctx, "out_sample_rate", c->sample_rate, 0);
//av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", c->sample_fmt, 0);
av_opt_set_int(swr_ctx, "in_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(swr_ctx, "in_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
av_opt_set_int(swr_ctx, "out_channel_layout", AV_CH_LAYOUT_STEREO, 0);
av_opt_set_int(swr_ctx, "out_sample_rate", c->sample_rate, 0);
av_opt_set_sample_fmt(swr_ctx, "out_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
And the revised function:
static void get_audio_frame_for_planar_stereo(uint8_t **samples, int frame_size, int nb_channels)
{
int j, i;
float v[2];
float *q1 = (float *) samples[0];
float *q2 = (float *) samples[1];
for (j = 0; j < frame_size; j++)
{
v[0] = (tan(t) * 1);
v[1] = (sin(t) * 1);
*q1++ = v[0];
*q2++ = v[1];
t += tincr;
tincr += tincr2;
}
}
Now it appears to be working properly. I tried changing the function parameters from uint8_t** to float**, as well as src_samples_data from uint8_t** to float**, but it did not make any noticeable difference.
Updated code: https://gist.github.com/anonymous/35371b2c106961029c3d
Thanks for highlighting the place(s) that result in this behavior!
With AV_SAMPLE_FMT_FLTP, each sample must be a 32-bit float value (from -1.0 to 1.0). You also initialize the resampler to accept floats:
av_opt_set_sample_fmt(swr_ctx, "in_sample_fmt", AV_SAMPLE_FMT_FLTP, 0);
but you feed it arrays of ints:
get_audio_frame_for_planar_stereo( (int16_t **)src_samples_data, src_nb_samples, c->channels );
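A sketch of the difference (variable names follow the muxing example and should be treated as assumptions):

/* Wrong: each plane filled with 16-bit integer samples */
int16_t *q = (int16_t *) samples[0];
*q++ = (int)(sin(t) * 10000);
/* Right: each plane filled with 32-bit floats in [-1.0, 1.0], as in your revised function */
float *qf = (float *) samples[0];
*qf++ = (float) sin(t);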
I'm trying to get the Kinect depth camera pixels to overlay onto the RGB camera image. I am using the C++ Kinect 1.0 SDK with an Xbox Kinect and OpenCV, and I am trying to use the new NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution method.
I have watched the image render itself in slow motion, and it looks as if pixels are being drawn multiple times in the one frame. It first draws itself from the top and left borders, then it gets to a point (you can see a 45-degree angle in there) where it starts drawing strangely.
I have been trying to base my code on the C# code written by Adam Smith on the MSDN forums, but no dice. I have stripped out the overlay stuff and just want to draw the normalized depth pixels where they "should" be in the RGB image.
The image on the left is what I'm getting when trying to fit the depth image to RGB space, and the image on the right is the "raw" depth image as I like to see it. I was hoping my method would create an image similar to the one on the right, with slight distortions.
This is the code and object definitions that I have at the moment:
// From initialization
INuiSensor *m_pNuiInstance;
NUI_IMAGE_RESOLUTION m_nuiResolution = NUI_IMAGE_RESOLUTION_640x480;
HANDLE m_pDepthStreamHandle;
IplImage *m_pIplDepthFrame;
IplImage *m_pIplFittedDepthFrame;
m_pIplDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);
m_pIplFittedDepthFrame = cvCreateImage(cvSize(640, 480), 8, 1);
// Method
IplImage *Kinect::GetRGBFittedDepthFrame() {
static long *pMappedBits = NULL;
if (!pMappedBits) {
pMappedBits = new long[640*480*2];
}
NUI_IMAGE_FRAME pNuiFrame;
NUI_LOCKED_RECT lockedRect;
HRESULT hr = m_pNuiInstance->NuiImageStreamGetNextFrame(m_pDepthStreamHandle, 0, &pNuiFrame);
if (FAILED(hr)) {
// return the older frame
return m_pIplFittedDepthFrame;
}
bool hasPlayerData = HasSkeletalEngine(m_pNuiInstance);
INuiFrameTexture *pTexture = pNuiFrame.pFrameTexture;
pTexture->LockRect(0, &lockedRect, NULL, 0);
if (lockedRect.Pitch != 0) {
cvZero(m_pIplFittedDepthFrame);
hr = m_pNuiInstance->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
m_nuiResolution,
NUI_IMAGE_RESOLUTION_640x480,
640 * 480, /* size is previous */ (unsigned short*) lockedRect.pBits,
(640 * 480) * 2, /* size is previous */ pMappedBits);
if (FAILED(hr)) {
return m_pIplFittedDepthFrame;
}
for (int i = 0; i < lockedRect.size; i++) {
unsigned char* pBuf = (unsigned char*) lockedRect.pBits + i;
unsigned short* pBufS = (unsigned short*) pBuf;
unsigned short depth = hasPlayerData ? ((*pBufS) & 0xfff8) >> 3 : ((*pBufS) & 0xffff);
unsigned char intensity = depth > 0 ? 255 - (unsigned char) (256 * depth / 0x0fff) : 0;
long
x = pMappedBits[i], // tried with *(pMappedBits + (i * 2)),
y = pMappedBits[i + 1]; // tried with *(pMappedBits + (i * 2) + 1);
if (x >= 0 && x < m_pIplFittedDepthFrame->width && y >= 0 && y < m_pIplFittedDepthFrame->height) {
m_pIplFittedDepthFrame->imageData[x + y * m_pIplFittedDepthFrame->widthStep] = intensity;
}
}
}
pTexture->UnlockRect(0);
m_pNuiInstance->NuiImageStreamReleaseFrame(m_pDepthStreamHandle, &pNuiFrame);
return(m_pIplFittedDepthFrame);
}
Thanks
I have found that the problem was that the loop,
for (int i = 0; i < lockedRect.size; i++) {
// code
}
was iterating on a per-byte basis, not on a per-short (2-byte) basis. Since lockedRect.size returns the number of bytes, the fix was simply changing the increment to i += 2; even better is changing it to sizeof(short), like so:
for (int i = 0; i < lockedRect.size; i += sizeof(short)) {
// code
}
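For completeness, a sketch of the corrected loop body (names match the code above); note that with i advancing in bytes, the pMappedBits lookup at i and i + 1 still lines up, because the table holds two longs (x, y) per depth pixel:

for (int i = 0; i < lockedRect.size; i += sizeof(short)) {
    unsigned short *pBufS = (unsigned short *) ((unsigned char *) lockedRect.pBits + i);
    unsigned short depth = hasPlayerData ? ((*pBufS) & 0xfff8) >> 3 : (*pBufS);
    unsigned char intensity = depth > 0 ? 255 - (unsigned char) (256 * depth / 0x0fff) : 0;
    long x = pMappedBits[i];      // i advances by 2 per depth pixel, matching the (x, y) pairs
    long y = pMappedBits[i + 1];
    if (x >= 0 && x < m_pIplFittedDepthFrame->width && y >= 0 && y < m_pIplFittedDepthFrame->height)
        m_pIplFittedDepthFrame->imageData[x + y * m_pIplFittedDepthFrame->widthStep] = intensity;
}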
I am trying to do some image processing on a UIImage using some EAGLView code from the GLImageProcessing sample from Apple. The sample code is configured to perform processing on a pre-installed image (Image.png). I am trying to modify the code so that it will accept a UIImage (or at least CGImage data) of my choice and process that instead. The problem is that the texture-loader function loadTexture() (below) seems to accept only C structures as parameters, and I have not been able to get it to accept a UIImage* or a CGImage as a parameter. Can someone give me a clue as to how to bridge the gap so that I can pass my UIImage into the C function?
------------ from Texture.h ---------------
#ifndef TEXTURE_H
#define TEXTURE_H
#include "Imaging.h"
void loadTexture(const char *name, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_H */
----------------from Texture.m---------------------
#import <UIKit/UIKit.h>
#import "Texture.h"
static unsigned int nextPOT(unsigned int x)
{
x = x - 1;
x = x | (x >> 1);
x = x | (x >> 2);
x = x | (x >> 4);
x = x | (x >> 8);
x = x | (x >>16);
return x + 1;
}
// This is not a fully generalized image loader. It is an example of how to use
// CGImage to directly access decompressed image data. Only the most commonly
// used image formats are supported. It will be necessary to expand this code
// to account for other uses, for example cubemaps or compressed textures.
//
// If the image format is supported, this loader will Gen a OpenGL 2D texture object
// and upload texels from it, padding to POT if needed. For image processing purposes,
// border pixels are also replicated here to ensure proper filtering during e.g. blur.
//
// The caller of this function is responsible for deleting the GL texture object.
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;
// Parse CGImage info
info = CGImageGetBitmapInfo(CGImage); // CGImage may return pixels in RGBA, BGRA, or ARGB order
colormodel = CGColorSpaceGetModel(CGImageGetColorSpace(CGImage));
size_t bpp = CGImageGetBitsPerPixel(CGImage);
if (bpp < 8 || bpp > 32 || (colormodel != kCGColorSpaceModelMonochrome && colormodel != kCGColorSpaceModelRGB))
{
// This loader does not support all possible CGImage types, such as paletted images
CGImageRelease(CGImage);
return;
}
components = bpp>>3;
rowBytes = CGImageGetBytesPerRow(CGImage); // CGImage may pad rows
rowPixels = rowBytes / components;
imgWide = CGImageGetWidth(CGImage);
imgHigh = CGImageGetHeight(CGImage);
img->wide = rowPixels;
img->high = imgHigh;
img->s = (float)imgWide / rowPixels;
img->t = 1.0;
// Choose OpenGL format
switch(bpp)
{
default:
rt_assert(0 && "Unknown CGImage bpp");
case 32:
{
internal = GL_RGBA;
switch(info & kCGBitmapAlphaInfoMask)
{
case kCGImageAlphaPremultipliedFirst:
case kCGImageAlphaFirst:
case kCGImageAlphaNoneSkipFirst:
format = GL_BGRA;
break;
default:
format = GL_RGBA;
}
break;
}
case 24:
internal = format = GL_RGB;
break;
case 16:
internal = format = GL_LUMINANCE_ALPHA;
break;
case 8:
internal = format = GL_LUMINANCE;
break;
}
// Get a pointer to the uncompressed image data.
//
// This allows access to the original (possibly unpremultiplied) data, but any manipulation
// (such as scaling) has to be done manually. Contrast this with drawing the image
// into a CGBitmapContext, which allows scaling, but always forces premultiplication.
CFDataRef data = CGDataProviderCopyData(CGImageGetDataProvider(CGImage));
rt_assert(data);
pixels = (GLubyte *)CFDataGetBytePtr(data);
rt_assert(pixels);
// If the CGImage component layout isn't compatible with OpenGL, fix it.
// On the device, CGImage will generally return BGRA or RGBA.
// On the simulator, CGImage may return ARGB, depending on the file format.
if (format == GL_BGRA)
{
uint32_t *p = (uint32_t *)pixels;
int i, num = img->wide * img->high;
if ((info & kCGBitmapByteOrderMask) != kCGBitmapByteOrder32Host)
{
// Convert from ARGB to BGRA
for (i = 0; i < num; i++)
p[i] = (p[i] << 24) | ((p[i] & 0xFF00) << 8) | ((p[i] >> 8) & 0xFF00) | (p[i] >> 24);
}
// All current iPhoneOS devices support BGRA via an extension.
if (!renderer->extension[IMG_texture_format_BGRA8888])
{
format = GL_RGBA;
// Convert from BGRA to RGBA
for (i = 0; i < num; i++)
#if __LITTLE_ENDIAN__
p[i] = ((p[i] >> 16) & 0xFF) | (p[i] & 0xFF00FF00) | ((p[i] & 0xFF) << 16);
#else
p[i] = ((p[i] & 0xFF00) << 16) | (p[i] & 0xFF00FF) | ((p[i] >> 16) & 0xFF00);
#endif
}
}
// Determine if we need to pad this image to a power of two.
// There are multiple ways to deal with NPOT images on renderers that only support POT:
// 1) scale down the image to POT size. Loses quality.
// 2) pad up the image to POT size. Wastes memory.
// 3) slice the image into multiple POT textures. Requires more rendering logic.
//
// We are only dealing with a single image here, and pick 2) for simplicity.
//
// If you prefer 1), you can use CoreGraphics to scale the image into a CGBitmapContext.
POTWide = nextPOT(img->wide);
POTHigh = nextPOT(img->high);
if (!renderer->extension[APPLE_texture_2D_limited_npot] && (img->wide != POTWide || img->high != POTHigh))
{
GLuint dstBytes = POTWide * components;
GLubyte *temp = (GLubyte *)malloc(dstBytes * POTHigh);
for (y = 0; y < img->high; y++)
memcpy(&temp[y*dstBytes], &pixels[y*rowBytes], rowBytes);
img->s *= (float)img->wide/POTWide;
img->t *= (float)img->high/POTHigh;
img->wide = POTWide;
img->high = POTHigh;
pixels = temp;
rowBytes = dstBytes;
}
// For filters that sample texel neighborhoods (like blur), we must replicate
// the edge texels of the original input, to simulate CLAMP_TO_EDGE.
{
GLuint replicatew = MIN(MAX_FILTER_RADIUS, img->wide-imgWide);
GLuint replicateh = MIN(MAX_FILTER_RADIUS, img->high-imgHigh);
GLuint imgRow = imgWide * components;
for (y = 0; y < imgHigh; y++)
for (x = 0; x < replicatew; x++)
memcpy(&pixels[y*rowBytes+imgRow+x*components], &pixels[y*rowBytes+imgRow-components], components);
for (y = imgHigh; y < imgHigh+replicateh; y++)
memcpy(&pixels[y*rowBytes], &pixels[(imgHigh-1)*rowBytes], imgRow+replicatew*components);
}
if (img->wide <= renderer->maxTextureSize && img->high <= renderer->maxTextureSize)
{
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
// Set filtering parameters appropriate for this application (image processing on screen-aligned quads.)
// Depending on your needs, you may prefer linear filtering, or mipmap generation.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, internal, img->wide, img->high, 0, format, GL_UNSIGNED_BYTE, pixels);
}
if (temp) free(temp);
CFRelease(data);
CGImageRelease(CGImage);
img->texID = texID;
}
Side note: the above code is the original, unmodified sample code from Apple and does not generate any errors when compiled. However, when I try to modify the .h and .m to accept a UIImage* parameter (as below), the compiler generates the following error: "Error: expected declaration specifiers or '...' before UIImage".
----------Modified .h Code that generates the Compiler Error:-------------
void loadTexture(const char *name, Image *img, RendererInfo *renderer, UIImage *newImage)
You are probably importing this .h into a .c somewhere. That tells the compiler to use C rather than Objective-C. UIKit.h (and its many children) are in Objective-C and cannot be compiled by a C compiler.
You can rename all your .c files to .m, but what you probably really want is just to use CGImageRef and import CGImage.h. CoreGraphics is C-based; UIKit is Objective-C. There is no problem, if you want, with Texture.m being Objective-C. Just make sure that Texture.h is pure C. Alternatively (and I do this a lot with C++ code), you can make a Texture+C.h header that provides just the C-safe functions you want to expose. Import Texture.h in Objective-C code, and Texture+C.h in C code. Or name them the other way around if more convenient, with a Texture+ObjC.h.
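A sketch of what the C-safe header could look like (the function name here is hypothetical); CGImageRef comes from the C-based CoreGraphics API, so the header stays compilable from plain .c files, and Objective-C callers can simply pass someUIImage.CGImage:

/* Texture+C.h -- hypothetical C-only interface */
#ifndef TEXTURE_C_H
#define TEXTURE_C_H
#include <CoreGraphics/CoreGraphics.h>
#include "Imaging.h"
void loadTextureFromCGImage(CGImageRef image, Image *img, RendererInfo *renderer);
#endif /* TEXTURE_C_H */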
It sounds like your file isn't importing the UIKit header.
Why are you passing a new image to loadTexture, instead of using loadTexture's own UIImage loading to open the new image you want?
loadTexture:
void loadTexture(const char *name, Image *img, RendererInfo *renderer)
{
GLuint texID = 0, components, x, y;
GLuint imgWide, imgHigh; // Real image size
GLuint rowBytes, rowPixels; // Image size padded by CGImage
GLuint POTWide, POTHigh; // Image size padded to next power of two
CGBitmapInfo info; // CGImage component layout info
CGColorSpaceModel colormodel; // CGImage colormodel (RGB, CMYK, paletted, etc)
GLenum internal, format;
GLubyte *pixels, *temp = NULL;
[Why not have the following fetch your UIImage?]
CGImageRef CGImage = [UIImage imageNamed:[NSString stringWithUTF8String:name]].CGImage;
rt_assert(CGImage);
if (!CGImage)
return;