So I've set up my framework as a neat little system that wraps SDL, OpenGL and Box2D all together for a 2D game.
It works like this: I create an object of my "GameObject" class, specify a "source PNG", and it automatically creates an OpenGL texture and a Box2D body of the same dimensions.
Now I'm worried about what happens once I start needing to render many different textures on screen.
Is it possible to load in all my sprite sheets at run time and then group them all together into one texture? If so, how? And what would be a good way to implement it (so that I wouldn't have to specify any parameters manually)?
The reason I want to do it at run time rather than pre-baking the atlas is so that I can easily pack all (or most) of the tiles, enemies, etc. of a given level into this one texture, because not every level has the same enemies. It would also make the whole art-creation process easier.
There are likely some libraries that already exist for creating texture atlases (optimal packing is a nontrivial problem) and converting old texture coordinates to the new ones.
However, if you want to do it yourself, you probably would do something like this:
1. Load all textures from disk (your "source PNG") and retrieve the raw pixel data buffer.
2. If necessary, convert all source textures into the same pixel format.
3. Create a new texture big enough to hold all the existing textures, along with a corresponding buffer to hold the pixel data.
4. "Blit" the pixel data from the source images into the new buffer at a given offset (see below).
5. Create a texture as normal using the new buffer's data.
6. While doing this, record the mapping from "old" texture coordinates to "new" texture coordinates; it should be a simple matter of storing the offset of each element of the atlas and applying a quick transform (see the sketch below). It would probably also be fairly easy to do the remapping inside a pixel shader, but some profiling would be required to see whether the overhead of passing the extra parameters is worth it.
Obviously you also want to check to make sure you are not doing something silly like loading the same texture into the atlas twice, but that's a concern that's outside this procedure.
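For the old-to-new coordinate remapping mentioned above, a minimal sketch might look like this; the struct and function names here are made up purely for illustration, and each atlas entry is assumed to record where its sub-image was blitted:
// Hypothetical bookkeeping for one atlas entry; names are illustrative only.
struct AtlasEntry {
    int offsetX, offsetY;   // where the sub-image was blitted into the atlas
    int width, height;      // size of the sub-image in pixels
};

// Convert the sprite's original (u, v) in [0, 1] into atlas (u, v) in [0, 1].
void remapUV(const AtlasEntry& e, int atlasW, int atlasH,
             float oldU, float oldV, float& newU, float& newV)
{
    newU = (e.offsetX + oldU * e.width)  / float(atlasW);
    newV = (e.offsetY + oldV * e.height) / float(atlasH);
}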
To "blit" (copy) from the source image to the target image you'd do something like this (assuming you're copying a 128x128 texture into a 512x512 atlas texture, starting at (128, 0) on the target):
unsigned char* source = new unsigned char[128 * 128 * 4]; // in reality, comes from your texture loader
unsigned char* target = new unsigned char[512 * 512 * 4];

int targetX = 128;
int targetY = 0;

for (int sourceY = 0; sourceY < 128; ++sourceY) {
    for (int sourceX = 0; sourceX < 128; ++sourceX) {
        int from = (sourceY * 128 * 4) + (sourceX * 4);                         // 4 bytes per pixel (assuming RGBA)
        int to   = ((targetY + sourceY) * 512 * 4) + ((targetX + sourceX) * 4); // same format as source
        for (int channel = 0; channel < 4; ++channel) {
            target[to + channel] = source[from + channel];
        }
    }
}
This is a very simple brute force implementation: there are much faster, more succinct and more clever ways to copy an array, but the idea is that you are basically copying the contents of the source texture into the target texture at a given X and Y offset. In the end, you will have created a new texture which contains the old textures in it.
If the indexing math doesn't make sense to you, think about how a 2D array is actually indexed inside a 1D space (such as computer memory).
Please forgive any bugs. This isn't production code but instead something I wrote without checking if it compiles or runs.
Since you're using SDL, I should mention that it has a nice function that might be able to help you: SDL_BlitSurface. You can create an SDL_Surface entirely within SDL and simply use SDL_BlitSurface to copy your source surfaces into it, then convert the atlas surface into a GL texture.
It will take care of all the math, and can also do a format conversion for you on the fly.
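A rough sketch of that route, assuming SDL2 with SDL_image; the 512x512 size, "enemy.png" and the (128, 0) placement are just placeholders, and error checking is omitted:
// Build one atlas surface, blit the individual sprites into it, then upload it as a single GL texture.
SDL_Surface* atlas = SDL_CreateRGBSurface(0, 512, 512, 32,
    0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);   // RGBA byte order on little-endian machines

SDL_Surface* sprite = IMG_Load("enemy.png");
SDL_SetSurfaceBlendMode(sprite, SDL_BLENDMODE_NONE);   // copy the pixels, don't alpha-blend them
SDL_Rect dst = { 128, 0, sprite->w, sprite->h };       // where this sprite lives in the atlas
SDL_BlitSurface(sprite, NULL, atlas, &dst);            // does the copy and any format conversion
SDL_FreeSurface(sprite);
// ...repeat for the other sprites, remembering each destination rectangle...

GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, atlas->w, atlas->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, atlas->pixels);
SDL_FreeSurface(atlas);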
In my OpenGL program I'm loading a 24 BPP image with a width of 501. The GL_UNPACK_ALIGNMENT parameter is set to 4. They write that it shouldn't work, because the size of each row being uploaded (501*3 = 1503) is not divisible by 4. However, I can see a normal texture without artifacts when it is displayed.
So my code works. I'm trying to understand why, so that I fully understand this and don't end up with bugs in the project later.
Maybe (?) it works because I'm not just calling glTexImage2D. Instead, I first create a blank texture with proper (power-of-two) dimensions, and then upload the pixels with glTexSubImage2D.
EDIT:
But do you think it makes sense to write some code like this?
// w     - the width of the image
// depth - the number of bytes per pixel of the image
bool change_alignment = false;

if (depth != 4 && !is_divisible_by_4(w * depth)) // *
{
    change_alignment = true;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
}

// ... now use glTexImage2D

if (change_alignment) glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // set back to the default

// * - of course we don't even need such a function,
//     but I wanted to make the code as clear as possible
Hopefully that should prevent the application from crashing or malfunctioning?
It depends on where your image data is coming from.
The Windows BMP format, for example, enforces a 4-byte row alignment. Indeed, formats like this are exactly why OpenGL has a row-alignment field: because some image formats enforce a row alignment.
So how correct it is to use a 4-byte row alignment on your data depends entirely on how your data is aligned in memory. Some image loaders will automatically align to 4 bytes. And some will not.
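If you don't want to rely on the loader, a small sketch of the kind of check your EDIT describes might look like this; the helper name is made up, w and depth are the variables from your snippet, and h and pixels are assumed to exist:
// Pick the largest unpack alignment that divides the row size in bytes.
// Valid values for GL_UNPACK_ALIGNMENT are 1, 2, 4 and 8.
GLint alignmentFor(int rowBytes)
{
    if (rowBytes % 8 == 0) return 8;
    if (rowBytes % 4 == 0) return 4;
    if (rowBytes % 2 == 0) return 2;
    return 1;
}

// Usage, with 'depth' meaning bytes per pixel as in your snippet:
glPixelStorei(GL_UNPACK_ALIGNMENT, alignmentFor(w * depth));
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4); // restore the default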
I have an array of grayscale pixel values (floats as a fraction of 1) that I need to display, and then possibly save. The values just came from computations, so I have no libraries currently installed or anything. I've been trying to figure out the CImage libraries, but can't make much sense of what I need to do to visualize this data. Any help would be appreciated!
Thank you.
One possible approach, which I've used with some success, is to use D3DX's texture functions to create a Direct3D texture and fill it. There is some overhead in starting up D3D, but it provides you with multi-threadable texture creation and built-in-ish viewing, as well as saving to files without much more fuss.
If you're not interested in using D3D(X), some of the specifics here won't be useful, but the generator should help figure out how to output data for any other library.
For example, assuming an existing D3D9 device pDevice and a noise generator (or other texture data source) pGen:
IDirect3DTexture9* pTexture = nullptr;
D3DXCreateTexture(pDevice, 255, 255, 0, 0, D3DFMT_R8G8B8, D3DPOOL_DEFAULT, &pTexture);
D3DXFillTexture(pTexture, &texFill, pGen);
D3DXSaveTextureToFile("texture.png", D3DXIFF_PNG, pTexture, NULL);
The generator function:
VOID WINAPI texFill(
    D3DXVECTOR4* pOut,
    CONST D3DXVECTOR2* pTexCoord,
    CONST D3DXVECTOR2* pTexelSize,
    LPVOID pData)
{
    // Option A - for a prefilled 255x255 array (pTexCoord is normalized, so scale it up):
    float* pArray = (float*)pData;
    int x = (int)(pTexCoord->x * 255);
    int y = (int)(pTexCoord->y * 255);
    float initial = pArray[(y * 255) + x];

    // Option B - for a generator object (passed in as the third param to D3DXFillTexture):
    // Generator* pGen = (Generator*)pData;
    // float initial = pGen->GetPixel(pTexCoord->x, pTexCoord->y);

    // Output components are expected in the 0..1 range; D3DX converts them to the texture format.
    pOut->x = pOut->y = pOut->z = initial;
    pOut->w = 1.0f; // fully opaque alpha
}
D3DXCreateTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172800%28v=vs.85%29.aspx
D3DXFillTexture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb172833(v=vs.85).aspx
D3DXSaveTextureToFile: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205433(v=vs.85).aspx
Corresponding functions are available for volume/3D textures. As they are already set up for D3D, you can simply render the texture to a flat quad to view, or use as a source in whatever graphical application you may want.
So long as your generator is thread-safe, you can run the create/fill/save in one thread per texture, and generate multiple slices or frames simultaneously.
I found that the best solution for this problem was to use the SFML library (www.sfml-dev.org). Very simple to use, but must be compiled from source if you want to use it with VS2010.
You can use the PNM image format without any libraries whatsoever (the format itself is trivial). However, it's pretty archaic and you'll have to have an image viewer that supports it. IrfanView, for example, supports it on Windows.
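For instance, a minimal sketch that dumps a grayscale float buffer (values between 0 and 1, as in the question) to a binary PGM file; the function name and parameters here are made up:
#include <cstdio>

// Write 'pixels' (width*height floats in the range 0..1) as a binary 8-bit PGM file.
// Any PNM-capable viewer (e.g. IrfanView, GIMP) can open the result.
bool writePGM(const char* filename, const float* pixels, int width, int height)
{
    FILE* f = std::fopen(filename, "wb");
    if (!f) return false;

    std::fprintf(f, "P5\n%d %d\n255\n", width, height); // binary greyscale header
    for (int i = 0; i < width * height; ++i)
    {
        float v = pixels[i];
        if (v < 0.0f) v = 0.0f;
        if (v > 1.0f) v = 1.0f;
        std::fputc((unsigned char)(v * 255.0f + 0.5f), f);
    }
    return std::fclose(f) == 0;
}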
I'm looking for a way to get a buffer of image data into a PNG file, and a way to get a PNG file into a buffer.
There are just these two things I want to do.
It would be a dead simple wrapper that uses png.h. Well, not exactly dead simple because of the horribly complex libpng API, but the concept of it is.
I tried DevIL before. It is much easier to use than libpng. Still, I have had issues with it. Also, DevIL does too much. I only need lean and mean basic PNG format support, not 20 other formats as well.
Then I found this page. I praised the Pixel Fairy and the Almighty Google for giving me an implementation on a silver platter... Then it turned out this screws up the image: in the processed image, every fourth pixel in each scanline goes missing. I am fairly certain from reading the source that this is not meant to happen! It's supposed to zero out red and set green to blue. That didn't happen either.
I have also tried png++. The issue I had with it is that I couldn't get data out of a PNG in a format suitable for loading into OpenGL; I would have had to construct another buffer. It just looked ugly, but I will definitely try png++ again before I even think about giving DevIL another shot, because png++ worked, at least. It also has the header-only aspect going for it. Still, it did produce a bunch of compiler warnings.
Are there any other contenders? Anybody who has worked directly with libpng would know how to write what I am asking for: one function that takes a filename and fills a 32-bpp buffer and sets two resolution integers; one function that takes a 32-bpp buffer, two resolution integers, and a filename.
Update-edit: I found this. Might be something there.
This tutorial seems to have what you want.
From the link:
//Here's one of the pointers we've defined in the error handler section:
//Array of row pointers. One for every row.
rowPtrs = new png_bytep[imgHeight];
//Allocate a buffer with enough space.
//(Don't use the stack, these blocks get big easily)
//This pointer was also defined in the error handling section, so we can clean it up on error.
data = new char[imgWidth * imgHeight * bitdepth * channels / 8];

//This is the length, in bytes, of one row.
const unsigned int stride = imgWidth * bitdepth * channels / 8;

//A little for-loop here to set all the row pointers to the starting
//addresses for every row in the buffer
for (size_t i = 0; i < imgHeight; i++) {
    //Set the pointer to the data pointer + i times the row stride.
    //Notice that the row order is reversed with q.
    //This is how at least OpenGL expects it,
    //and how many other image loaders present the data.
    png_uint_32 q = (imgHeight - i - 1) * stride;
    rowPtrs[i] = (png_bytep)data + q;
}

//And here it is! The actual reading of the image!
//Read the image data and write it to the addresses pointed to
//by rowPtrs (in other words: our image data buffer)
png_read_image(pngPtr, rowPtrs);
I'd add CImg to the list of options. While it is an image library, the API is not as high-level as most (DevIL/ImageMagick/FreeImage/GIL). It is also header-only.
The image class has simple width, height and data members with public access. Under the hood it uses libpng (if you tell it to with a preprocessor directive). The data is stored as whatever type you chose for the templated image object.
CImg<uint8_t> myRGBA("fname.png");
myRGBA._data[0] = 255; // set red value of first pixel
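One caveat if you want a buffer you can hand straight to glTexImage2D: CImg keeps its channels in planar order (all red values, then all green, and so on), so an interleaved copy has to be built by hand. A rough sketch, assuming the loaded image really has four channels (needs <vector> and <cstdint>):
// Repack planar CImg data into an interleaved RGBA buffer for OpenGL.
std::vector<uint8_t> rgba(myRGBA.width() * myRGBA.height() * 4);
for (int y = 0; y < myRGBA.height(); ++y)
    for (int x = 0; x < myRGBA.width(); ++x)
        for (int c = 0; c < 4; ++c)
            rgba[(y * myRGBA.width() + x) * 4 + c] = myRGBA(x, y, 0, c);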
Sean Barrett has written two public-domain, single-file libraries for PNG image reading/writing: stb_image and stb_image_write.
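If those fit, a rough sketch of the two functions the question asks for, built on stb_image/stb_image_write (error handling kept to a minimum):
// stb_image.h / stb_image_write.h are single-header: define the *_IMPLEMENTATION
// macros in exactly one translation unit before including them.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define STB_IMAGE_WRITE_IMPLEMENTATION
#include "stb_image_write.h"

// Load a PNG into a 32-bpp (RGBA) buffer and report its resolution.
// The caller frees the returned buffer with stbi_image_free().
unsigned char* loadPNG(const char* filename, int* width, int* height)
{
    int channelsInFile = 0;
    return stbi_load(filename, width, height, &channelsInFile, 4); // force RGBA
}

// Write a 32-bpp (RGBA) buffer out as a PNG. Returns true on success.
bool savePNG(const char* filename, const unsigned char* pixels, int width, int height)
{
    return stbi_write_png(filename, width, height, 4, pixels, width * 4) != 0;
}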
I have a bitmap image that is currently represented as a byte array (could be YCrCb or RGB). Is there a function built into OpenGL that will allow me to look at individual pixels from this byte array?
I know that there is the function glReadPixels but I don't need to be reading from the frame buffer if I've already got the data.
If not, is there an alternative way to do this in C++?
OpenGL is a drawing API, not some kind of all-purpose graphics library – the 'L' in OpenGL should be read as Layer, not Library.
That being said: If you know the dimensions of the byte array, and the data layout, then it is trivial to fetch individual pixels.
pixel_at(x,y) = data_byte_array[row_stride * y + pixel_stride * x]
where, in a tightly packed format:
pixel_stride = bytes_per_pixel
row_stride = width * pixel_stride
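For example, a small sketch for a tightly packed, interleaved RGB array (the names are made up; YCrCb data works the same way, only the meaning of the channels differs):
// Fetch the (x, y) pixel from a tightly packed, interleaved RGB byte array.
struct RGB { unsigned char r, g, b; };

RGB pixelAt(const unsigned char* data, int width, int x, int y)
{
    const int pixelStride = 3;                 // bytes per pixel (RGB)
    const int rowStride   = width * pixelStride;
    const unsigned char* p = data + rowStride * y + pixelStride * x;
    return RGB{ p[0], p[1], p[2] };
}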
I've been working on some sound processing code and now I'm doing some visualizations. I finished making a spectrogram, but the way I am drawing it is too slow.
I'm using OpenGL to do 2D drawing, which has made searching for help more difficult. Also I am very new to OpenGL, so I don't know the standard way things are done.
I am storing the r,g,b values for each pixel in a large matrix.
Each time I get a small sound segment, I process it and convert it to column of pixels. Everything is shifted to the left 1 pixel, and the new line is put at the end.
Each time I redraw, I am looping through setting the color and drawing each pixel individually, which seems like a horribly inefficient way to do this.
Is there a better way to do this? Is there some method for simply shifting a bunch of pixels over?
There are many ways to improve your drawing speed.
The simplest would be to allocate an RGB texture that you draw using a screen-aligned textured quad.
Each time you want to add a new line, you can use glTexSubImage2D to upload a new subset of the texture and then redraw the quad.
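A rough sketch of that idea, treating the texture as a ring buffer of columns and doing the scrolling with texture coordinates; spectrogramTex, writeX, texWidth, texHeight and column are placeholders:
// Upload one new column of spectrogram data into an existing RGB texture.
// 'column' points to texHeight RGB bytes; 'writeX' wraps around the texture width.
glBindTexture(GL_TEXTURE_2D, spectrogramTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);      // a 1-pixel-wide RGB column has 3-byte rows
glTexSubImage2D(GL_TEXTURE_2D, 0,
                writeX, 0,                  // x/y offset inside the texture
                1, texHeight,               // one column, full height
                GL_RGB, GL_UNSIGNED_BYTE, column);
writeX = (writeX + 1) % texWidth;

// When drawing the quad, offset the texture coordinates by writeX / (float)texWidth
// (with GL_REPEAT wrapping) so the newest column always appears at the right edge.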
Are you perhaps passing a lot more data to the graphics card than you have pixels? This could happen if your FFT size is much larger than the height of the drawing area, or if the number of spectral lines is a lot more than its width. If so, it's possible that the bottleneck is passing too much data across the bus. Try reducing the number of spectral lines, either by averaging them or by peak picking (taking the maximum in each bin for a set of consecutive lines).
GL_POINTS, VBO, GL_STREAM_DRAW.
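One way to read that terse suggestion: put one vertex per pixel (position plus colour) into a vertex buffer object, re-upload it each frame with GL_STREAM_DRAW, and draw it as GL_POINTS. A rough sketch in fixed-function GL, where vertices and count are placeholders:
// 'vertices' is an interleaved array of count * (x, y, r, g, b) floats, one entry per pixel.
GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, count * 5 * sizeof(GLfloat), vertices, GL_STREAM_DRAW); // re-uploaded every frame

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(2, GL_FLOAT, 5 * sizeof(GLfloat), (const void*)0);
glColorPointer(3, GL_FLOAT, 5 * sizeof(GLfloat), (const void*)(2 * sizeof(GLfloat)));
glDrawArrays(GL_POINTS, 0, count);

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);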
I know this is an old question, but . . .
Use a circular buffer to store the pixels, and then simply call glDrawPixels twice with the appropriate offsets. Something like this untested C:
#define SIZE_X 800
#define SIZE_Y 600

/* Circular buffer: twice the screen width, so a full window of SIZE_X
   columns always exists, possibly split across the wrap point. */
unsigned char pixels[SIZE_Y][SIZE_X*2][3];
int start = 0;

void add_line(const unsigned char line[SIZE_Y][1][3]) {
    int i, j, coord = (start + SIZE_X) % (2*SIZE_X);
    for (i = 0; i < SIZE_Y; ++i) for (j = 0; j < 3; ++j) pixels[i][coord][j] = line[i][0][j];
    start = (start + 1) % (2*SIZE_X);
}

void draw(void) {
    int w;
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 2*SIZE_X); /* buffer rows are 2*SIZE_X pixels wide */
    w = 2*SIZE_X - start;              /* columns from 'start' to the end of the buffer */
    if (w > SIZE_X) w = SIZE_X;        /* never draw more than one screen width */
    glRasterPos2i(0, 0);               /* assumes a pixel-aligned orthographic projection */
    glDrawPixels(w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0][start][0]);
    if (w < SIZE_X) {                  /* wrapped part at the start of the buffer */
        glRasterPos2i(w, 0);
        glDrawPixels(SIZE_X - w, SIZE_Y, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0][0][0]);
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}