How to draw screenshot captured from glReadPixels to wxWidgets dialog/panel - c++

I have an OpenGL window and a wxWidgets dialog. I want to mirror the OpenGL output to the dialog, so my plan is to:
Capture a screenshot of the OpenGL framebuffer
Display it in the wxWidgets dialog.
Any ideas?
Update: This is how I currently use glReadPixels. (I also temporarily use FreeImage to save to a BMP file, but I expect the file saving to be removed if there's a way to channel the pixels directly into a wxImage.)
// Make the BYTE array, factor of 3 because it's RGB.
BYTE* pixels = new BYTE[3 * width * height];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// Convert to FreeImage format & save to file
FIBITMAP* image = FreeImage_ConvertFromRawBits(pixels, width, height, 3 * width, 24, 0x0000FF, 0xFF0000, 0x00FF00, false);
FreeImage_Save(FIF_BMP, image, "C:/test.bmp", 0);
// Free memory
FreeImage_Unload(image); // FIBITMAPs are released with FreeImage_Unload, not delete
delete[] pixels; // array new needs array delete

// Add Image Support for all types
wxInitAllImageHandlers();
BYTE* pixels = (BYTE*)malloc(3 * width * height); // wxImage takes ownership and frees the data with free(), so allocate with malloc, not new[]
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
// width, height, pixels, alpha
wxImage img(width, height, pixels, NULL); // I am not sure if NULL is permitted for the alpha channel, but you can test that yourself :).
// Second method:
wxImage img(width, height, true);
img.SetData(pixels);
You can now use the image for displaying, or for saving as JPG, PNG, BMP, or whatever you like. For just displaying it in a dialog you don't need to save it to disk, though of course you can. Just create the image on the heap then.
http://docs.wxwidgets.org/stable/wx_wximage.html#wximagector
Hope it helps.
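To skip the intermediate BMP file entirely, the glReadPixels buffer can be handed straight to wxImage. The one wrinkle is that glReadPixels returns rows bottom-up while wxImage expects them top-down, so the rows need flipping first. A minimal sketch of that step (the glReadPixels/wxImage calls in the trailing comment are the assumed surrounding code, not compiled here):

```cpp
#include <cstring>
#include <vector>

// glReadPixels fills the buffer bottom row first, but wxImage expects the
// top row first, so swap rows in place before constructing the wxImage.
void flip_rows_in_place(unsigned char* pixels, int width, int height,
                        int channels = 3) {
    const std::size_t stride = static_cast<std::size_t>(width) * channels;
    std::vector<unsigned char> tmp(stride);
    for (int y = 0; y < height / 2; ++y) {
        unsigned char* top = pixels + static_cast<std::size_t>(y) * stride;
        unsigned char* bot = pixels + static_cast<std::size_t>(height - 1 - y) * stride;
        std::memcpy(tmp.data(), top, stride);
        std::memcpy(top, bot, stride);
        std::memcpy(bot, tmp.data(), stride);
    }
}

// Assumed usage in the dialog code:
//   unsigned char* pixels = (unsigned char*)malloc(3 * width * height);
//   glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
//   flip_rows_in_place(pixels, width, height);
//   wxImage img(width, height, pixels);  // takes ownership, frees with free()
```

Alternatively, wxImage::Mirror(false) returns a vertically mirrored copy, at the cost of an extra allocation.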

Related

Saving a single color OpenGL texture to a file results in a 64x64 black box

This code fragment is meant to create a GL texture with a single color, then save the raw pixel data to disk. I then convert that to PNG using ffmpeg. I have tried multiple ways of generating the texture, and multiple ways of saving the texture data, but the result is always the same - a 1920x1080 image with a 64x64 black box in the corner. What I expected was a 1920x1080 image of a single color.
What am I doing wrong?
Conversion command:
ffmpeg -pix_fmt rgba -s 1920x1080 -i texture.raw -f image2 output.png
Code:
gpu::gles2::GLES2Interface* gl = GetContextProvider()->ContextGL();
GLuint texture;
gl->GenTextures(1, &texture);
gl->BindTexture(GL_TEXTURE_2D, texture);
int width = 1920;
int height = 1080;
std::vector<unsigned char> data(width * height * 4, 0);
for (size_t i = 2; i < data.size(); i += 4) {
    data[i] = 255; // blue channel
}
gl->TexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data.data());
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
gl->TexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
std::vector<unsigned char> buffer(width * height * 4);
gl->ReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer.data());
std::ofstream file("texture.raw", std::ios::binary);
file.write(reinterpret_cast<char*>(buffer.data()), buffer.size());
file.close();
Take a look at the description of glReadPixels in the reference documentation: https://registry.khronos.org/OpenGL-Refpages/gl4/html/glReadPixels.xhtml.
Essentially, glReadPixels reads pixels from the currently bound framebuffer, not from the currently bound GL_TEXTURE_2D. I can't answer with 100% confidence without more code and context, but it looks like the code you have runs before anything has actually been rendered to the framebuffer; you're only setting things up. Most likely you get a black box because the data written into the buffer isn't valid. If you want to read back the texture itself, attach it to a framebuffer object with glFramebufferTexture2D and call glReadPixels with that framebuffer bound.
Hope that helps.

Loading opengl texture using Boost.GIL

I wrote a simple app that loads a model using OpenGL, Assimp and Boost.GIL.
My model contains a PNG texture. When I load it with GIL and render it through OpenGL, I get a wrong result. Thanks to the power of CodeXL, I found that the texture loaded into OpenGL is completely different from the image itself.
Here is a similar question, and I followed its steps, but I still get the same mistake.
Here are my codes:
// --------- image loading
std::experimental::filesystem::path path(pathstr);
gil::rgb8_image_t img;
if (path.extension() == ".jpg" || path.extension() == ".jpeg" || path.extension() == ".png")
{
    if (path.extension() == ".png")
        gil::png_read_and_convert_image(path.string(), img);
    else
        gil::jpeg_read_and_convert_image(path.string(), img);

    _width = static_cast<int>(img.width());
    _height = static_cast<int>(img.height());

    typedef decltype(img)::value_type pixel;
    auto srcView = gil::view(img);
    //auto view = gil::interleaved_view(
    //    img.width(), img.height(), &*gil::view(img).pixels(), img.width() * sizeof pixel);
    auto pixeldata = new pixel[_width * _height];
    auto dstView = gil::interleaved_view(
        img.width(), img.height(), pixeldata, img.width() * sizeof pixel);
    gil::copy_pixels(srcView, dstView);
}
// ---------- texture loading
{
    glBindTexture(GL_TEXTURE_2D, handle());
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB,
        image.width(), image.height(),
        0, GL_RGB, GL_UNSIGNED_BYTE,
        reinterpret_cast<const void*>(image.data()));
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}
And my texture is:
When it runs, the CodeXL debugger reports that the texture became:
All other textures of this model went wrong too.
Technically this is a FAQ; it has been asked several times already. Essentially you're running into an alignment issue. By default (you can change it) OpenGL expects image rows to be aligned on 4-byte boundaries. If your image data doesn't match this, you get this skewed result. Adding a call to glPixelStorei(GL_UNPACK_ALIGNMENT, 1); right before the call to glTexImage… will do the trick for you. Of course, you should retrieve the actual alignment from the image metadata.
The image being "upside down" is caused by OpenGL putting the origin of textures in the lower left (if all transformation matrices are left at default or have a positive determinant). That is unlike most image file formats (but not all), which put it in the upper left. Just flip the vertical texture coordinate and you're golden.
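As a sketch of the alignment rule: the value passed to glPixelStorei(GL_UNPACK_ALIGNMENT, …) must evenly divide the byte stride of each row, so for tightly packed data it can be derived from the width and bytes per pixel. The helper below is my own illustration, not part of GIL or OpenGL:

```cpp
#include <initializer_list>

// Pick the largest alignment OpenGL accepts (8, 4, 2 or 1) that divides
// the row stride of a tightly packed image; pass the result to
// glPixelStorei(GL_UNPACK_ALIGNMENT, ...) before glTexImage2D.
int tight_unpack_alignment(int width, int bytes_per_pixel) {
    const int stride = width * bytes_per_pixel;
    for (int a : {8, 4, 2}) {
        if (stride % a == 0) return a;
    }
    return 1;
}
```

With a gil::rgb8_image_t the rows are 3 * width bytes, so any width not divisible by 4 produces a stride that breaks the default alignment of 4 and triggers the skew described above.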

Create texture from sprite_sheet coordinates

I have a sprite_sheet (example sheet):
I loaded it as vector data, and I need to make a new texture from a specified area. Like this:
const std::vector <char> image_data = LoadPNG("sprite_sheet.png"); // This works.
glMakeTexture([0, 0], [50, 50], configs, image_data) // to display only one square
But I don't know how, because most GL functions only work on the full area, like glTexImage2D:
// Only width and height here; the bottom-left coordinate is (0, 0). I want to change this.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &(_sprite_sheet.data[0]));
Is there a way to do that without loading the full sprite_sheet as a texture?
Note: I'm using picoPNG to decode the PNG. I can load the PNG, but not make a texture from a specified area.
Because you show no complete code, I assume that:
char *data; // data of 8-bit per sample RGBA image
int w, h; // size of the image
int sx, sy, sw, sh; // position and size of area you want to crop.
glTexImage2D does support regions-of-interest. You do it as follows:
glPixelStorei(GL_UNPACK_ROW_LENGTH, w);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, sx);
glPixelStorei(GL_UNPACK_SKIP_ROWS, sy);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // LodePNG tightly packs everything
glTexImage2D(GL_TEXTURE_2D, level, internalFormat, sw, sh, border, format, type, data);
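If you'd rather not rely on the GL_UNPACK_* state (for example on OpenGL ES 2.0, where GL_UNPACK_ROW_LENGTH is not available), you can also crop on the CPU first and upload the small buffer with a plain glTexImage2D. A sketch, assuming tightly packed RGBA data as picoPNG produces:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Copy the sw x sh region at (sx, sy) out of a tightly packed RGBA image
// of size w x h into a new tightly packed buffer, which can then be
// uploaded with glTexImage2D(..., sw, sh, ..., cropped.data()).
std::vector<unsigned char> crop_rgba(const unsigned char* data, int w, int h,
                                     int sx, int sy, int sw, int sh) {
    assert(sx >= 0 && sy >= 0 && sx + sw <= w && sy + sh <= h);
    std::vector<unsigned char> out(static_cast<std::size_t>(sw) * sh * 4);
    for (int row = 0; row < sh; ++row) {
        const unsigned char* src =
            data + (static_cast<std::size_t>(sy + row) * w + sx) * 4;
        std::memcpy(&out[static_cast<std::size_t>(row) * sw * 4], src,
                    static_cast<std::size_t>(sw) * 4);
    }
    return out;
}
```

This copies each row of the sub-rectangle once, which is exactly what the GL_UNPACK_SKIP_PIXELS/GL_UNPACK_SKIP_ROWS/GL_UNPACK_ROW_LENGTH combination asks the driver to do for you.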

How can I convert unsigned char* to an image file (like JPG) in C++?

I have an OpenGL application that creates one texture from an unsigned char* buffer, and I need to save this texture to an image file, but I don't know how. Can someone help me?
This is the declaration of the texture buffer:
static unsigned char* pDepthTexBuf;
and this is my code that use this texture:
glBindTexture(GL_TEXTURE_2D, depthTexID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, pDepthTexBuf);
But how can I save this texture buffer "pDepthTexBuf" to an image file?
This is a very complicated question... I suggest referring to other public examples, like this one: http://www.andrewewhite.net/wordpress/2008/09/02/very-simple-jpeg-writer-in-c-c/
Basically, you need to integrate an image library, and then use whatever hooks it supports to save your data.
The simplest approach is probably to use a library like OpenCV, which has some very easy to use mechanisms for turning byte arrays of RGB data into image files.
You can see an example of reading the OpenGL image buffer and storing it as a PNG file here. Saving a JPG may be as simple as changing the extension of the output file.
// Create an OpenCV matrix of the appropriate size and depth
cv::Mat img(windowSize.y, windowSize.x, CV_8UC3);
glPixelStorei(GL_PACK_ALIGNMENT, (img.step & 3) ? 1 : 4);
glPixelStorei(GL_PACK_ROW_LENGTH, img.step / img.elemSize());
// Fetch the pixels as BGR byte values
glReadPixels(0, 0, img.cols, img.rows, GL_BGR, GL_UNSIGNED_BYTE, img.data);
// Image files use Y = down, so we need to flip the image on the X axis
cv::flip(img, img, 0);
static int counter = 0;
static char buffer[128];
sprintf(buffer, "screenshot%05i.png", counter++);
// write the image file
bool success = cv::imwrite(buffer, img);
if (!success) {
    throw std::runtime_error("Failed to write image");
}

Loading 16-bit heightmaps with SOIL

I am trying to load a height map for terrain using SOIL. I use the following code:
unsigned char* image = SOIL_load_image(fname.c_str(), &width, &height, 0, SOIL_LOAD_L);
glBindTexture(GL_TEXTURE_2D, name);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, image);
However, my terrain looks stepped because the height is expressed with only 8 bits. How can I load a 16-bit height map with SOIL? Or should I use another image library for this task?
As Andon M. Coleman recommended, I used a raw binary format to store the 16-bit height data and got the required result.
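For reference, the raw-file route looks roughly like this. The sketch assumes a headerless, little-endian 16-bit file whose dimensions are known out of band (the file layout and function names are my assumptions); the decoded samples can then be uploaded with GL_UNSIGNED_SHORT:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Reassemble little-endian 16-bit samples from raw bytes, independent of
// the host machine's endianness.
std::vector<std::uint16_t> decode_u16le(const std::vector<unsigned char>& bytes) {
    std::vector<std::uint16_t> out(bytes.size() / 2);
    for (std::size_t i = 0; i < out.size(); ++i) {
        out[i] = static_cast<std::uint16_t>(bytes[2 * i]) |
                 (static_cast<std::uint16_t>(bytes[2 * i + 1]) << 8);
    }
    return out;
}

// Load a headerless width x height 16-bit heightmap; the dimensions must
// be known in advance because a raw file carries no metadata.
std::vector<std::uint16_t> load_heightmap(const char* path, int width, int height) {
    std::ifstream f(path, std::ios::binary);
    std::vector<unsigned char> bytes(static_cast<std::size_t>(width) * height * 2);
    f.read(reinterpret_cast<char*>(bytes.data()), bytes.size());
    return decode_u16le(bytes);
}

// Upload (not compiled here); on legacy GL you'd use GL_LUMINANCE16 instead:
//   glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0,
//                GL_RED, GL_UNSIGNED_SHORT, samples.data());
```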