Writing an OpenEXR 16-bit image file in C++

I am trying to write a 16-bit texture rendered with OpenGL to an OpenEXR file, following the example on page 4 of the documentation, but for some reason my code crashes when executing file_exr.writePixels(512). Is there anything I am missing here?
Update: I did check that fboId and pboId are properly initialized and that no OpenGL errors occur up to this point.
const Imf::Rgba * dest;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glReadPixels(0, 0, 512, 512, GL_BGRA, GL_HALF_FLOAT_NV, 0);
dest = (const Imf::Rgba *)glMapBuffer(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
Imf::RgbaOutputFile file_exr("/tmp/file.exr", 512, 512, Imf::WRITE_RGBA);
file_exr.setFrameBuffer(dest, 1, 512);
file_exr.writePixels(512);
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

Did you just copy and paste that code (and just that code)? Then the reason it fails is that:
The buffer object you want OpenGL to read the pixels into does not exist, so mapping it fails, which means you point OpenEXR at a null pointer.
There is not a single error-condition check in the above code.
Do this instead:
First a helper, to clean up the OpenGL error stack (which may accumulate multiple error conditions):
int check_gl_errors()
{
    int errors = 0;
    while( GL_NO_ERROR != glGetError() ) { errors++; }
    return errors;
}
Then this:
int const width = 512;
int const height = 512;
size_t const sizeof_half_float = 2;

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboId);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pboId);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB,
                width * height * 4 * sizeof_half_float, /* 4 channels (BGRA), 2 bytes each */
                NULL,
                GL_STATIC_READ_ARB);
if( !check_gl_errors() ) {
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    /* BTW: You have to check that your system actually supports the
       GL_HALF_FLOAT_NV format extension at all. */
    glReadPixels(0, 0, width, height, GL_BGRA, GL_HALF_FLOAT_NV, 0);
    if( !check_gl_errors() ) {
        Imf::Rgba const * const dest = (Imf::Rgba const*)
            glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
        if( !check_gl_errors() && nullptr != dest ) {
            Imf::RgbaOutputFile file_exr(
                "/tmp/file.exr",
                width, height,
                Imf::WRITE_RGBA);
            file_exr.setFrameBuffer(dest, 1, width);
            file_exr.writePixels(height);
            glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
        }
        else {
            /* glMapBuffer failed */
        }
    }
    else {
        /* glReadPixels failed */
    }
}
else {
    /* glBufferDataARB failed => no valid buffer object
       to work with in the first place */
}
All these error checks are important: instead of simply crashing, your program can report diagnostics about what went wrong.
Also note that with this order of operations the PBO doesn't actually buy you anything, because it gets mapped immediately after the glReadPixels call, which makes the whole transfer synchronous.
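If you do want the readback to be asynchronous, the usual pattern is to double-buffer the PBOs: issue glReadPixels into one buffer and map the one that was filled on the previous frame. A rough sketch (not from the original answer; it assumes two buffer objects pbo[0] and pbo[1], each allocated as above):
static unsigned frame = 0;
GLuint const read_pbo = pbo[frame % 2];       /* receives this frame's pixels */
GLuint const map_pbo  = pbo[(frame + 1) % 2]; /* was filled on the previous frame */

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, read_pbo);
glReadPixels(0, 0, width, height, GL_BGRA, GL_HALF_FLOAT_NV, 0); /* returns immediately */

glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, map_pbo);
Imf::Rgba const * const dest = (Imf::Rgba const*)
    glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
if( frame > 0 && nullptr != dest ) { /* skip frame 0: map_pbo holds no data yet */
    Imf::RgbaOutputFile file_exr("/tmp/file.exr", width, height, Imf::WRITE_RGBA);
    file_exr.setFrameBuffer(dest, 1, width);
    file_exr.writePixels(height);
}
if( nullptr != dest ) { glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB); }
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
++frame;
This way the GPU can finish the transfer into read_pbo in the background while you compress and write the previous frame's pixels.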


C/C++ ffmpeg output is low quality and blurry

I've made a program that takes a video file as input, edits it using OpenGL/GLFW, then encodes the edited video. The program works just fine and I get the desired output, but the video quality is really low and I don't know how to adjust it. The editing seems fine, since the display in the GLFW window is high resolution, and I don't think it's a scaling issue since the program just reads the pixels from the GLFW window and passes them to the encoder.
I'm encoding in the YUV420P format, but the data I'm getting from the GLFW window is in RGBA format. I'm getting the data using:
glReadPixels(0, 0,
             gl_width, gl_height,
             GL_RGBA, GL_UNSIGNED_BYTE,
             (GLvoid*) state.glBuffer);
I simply got the muxing.c example from ffmpeg's docs and edited it slightly so it looks something like this:
AVFrame* video_encoder::get_video_frame(OutputStream *ost)
{
    AVCodecContext *c = ost->enc;

    /* check if we want to generate more frames */
    if (av_compare_ts(ost->next_pts, c->time_base,
                      (float) STREAM_DURATION / 1000, (AVRational){ 1, 1 }) > 0)
        return NULL;

    /* when we pass a frame to the encoder, it may keep a reference to it
     * internally; make sure we do not overwrite it here */
    if (av_frame_make_writable(ost->frame) < 0)
        exit(1);

    if (c->pix_fmt != AV_PIX_FMT_YUV420P) {
        /* as we only generate a YUV420P picture, we must convert it
         * to the codec pixel format if needed */
        if (!ost->sws_ctx) {
            ost->sws_ctx = sws_getContext(c->width, c->height,
                                          AV_PIX_FMT_YUV420P,
                                          c->width, c->height,
                                          c->pix_fmt,
                                          SCALE_FLAGS, NULL, NULL, NULL);
            if (!ost->sws_ctx) {
                fprintf(stderr,
                        "Could not initialize the conversion context\n");
                exit(1);
            }
        }
#if __AUDIO_ONLY
        image_for_audio_only(ost->tmp_frame, ost->next_pts, c->width, c->height);
#endif
        sws_scale(ost->sws_ctx, (const uint8_t * const *) ost->tmp_frame->data,
                  ost->tmp_frame->linesize, 0, c->height, ost->frame->data,
                  ost->frame->linesize);
    } else {
        // This is where I set the information I got from the glfw window.
        set_frame_yuv_from_rgb(ost->frame, ost->sws_ctx);
    }

    ost->frame->pts = ost->next_pts++;
    return ost->frame;
}
void video_encoder::set_frame_yuv_from_rgb(AVFrame *frame, struct SwsContext *sws_context) {
    const int in_linesize[1] = { 4 * width };
    //uint8_t* dest[4] = { rgb_data, NULL, NULL, NULL };
    sws_context = sws_getContext(
        width, height, AV_PIX_FMT_RGBA,
        width, height, AV_PIX_FMT_YUV420P,
        SWS_BICUBIC, 0, 0, 0);
    sws_scale(sws_context, (const uint8_t * const *)&rgb_data, in_linesize, 0,
              height, frame->data, frame->linesize);
}
rgb_data is the buffer I got from the GLFW window; it's simply a uint8_t*.
At the end of all this, the encoded output (viewed in mplayer) is much lower quality compared to the GLFW window. How can I improve the quality of the video?
YouTube's recommended encoding settings are a good reference for better quality:
https://support.google.com/youtube/answer/1722171
Make sure to use a high bitrate and GOP size, e.g. 5 Mbps and 60 respectively.
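In a muxing.c-style setup these are fields on the codec context, set before the encoder is opened with avcodec_open2. A minimal sketch (the numbers are illustrative; tune them to your resolution and frame rate):
/* in add_stream(), before the encoder is opened: */
c->bit_rate = 5000000;  /* 5 Mbps; muxing.c defaults to a much lower 400 kbps */
c->gop_size = 60;       /* at most one intra frame every 60 frames */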

Drawing buffer to D3D9 texture

I'm trying to draw the CEF buffer (returned in OnPaint) to a D3D9 texture in a game, and the game randomly freezes permanently. I figured out that the code provided below is the reason for the freeze, but I still can't understand why. What did I miss?
// To create the texture I use this code
LPDIRECT3DTEXTURE9 tWebPNG;
D3DXCreateTexture(device, width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tWebPNG);

// And the problem is in this method
void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type, const RectList& dirtyRects, const void* buffer, int width, int height)
{
    D3DLOCKED_RECT LockedRect;
    D3DSURFACE_DESC SurfaceDesc;
    IDirect3DSurface9* pSurface;

    tWebPNG->GetSurfaceLevel(0, &pSurface);
    pSurface->GetDesc(&SurfaceDesc);
    pSurface->LockRect(&LockedRect, nullptr, 0);

    auto dest = (unsigned char*)LockedRect.pBits;
    auto src = (const char*)buffer;
    for (int i = 0; i < height; ++i)
    {
        memcpy(dest, src, width * 4);
        dest += LockedRect.Pitch;
        src += width * 4;
    }
    pSurface->UnlockRect();
}
To be clear: CEF renders as expected and reports no errors; this is just a texture rendering problem. I hope to get some help.
After discussing in comments, I have modified my code a bit:
// Modified OnPaint to work with mutexes
void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type, const RectList& dirtyRects, const void* buffer, int width, int height)
{
    {
        std::lock_guard<std::mutex> lock(m_RenderData.dataMutex);

        // Store render data
        m_RenderData.buffer = buffer;
        m_RenderData.width = width;
        m_RenderData.height = height;
        m_RenderData.dirtyRects = dirtyRects;
        m_RenderData.changed = true;
    }

    // Wait for the main thread to handle drawing the texture
    std::unique_lock<std::mutex> lock(m_RenderData.cvMutex);
    m_RenderData.cv.wait(lock);
}

// This method is intended to draw into the d3d9 layer
void Browser::draw()
{
    std::lock_guard<std::mutex> lock(m_RenderData.dataMutex);

    IDirect3DSurface9* pSurface;
    tWebPNG->GetSurfaceLevel(0, &pSurface);

    if (m_RenderData.changed)
    {
        // Lock surface
        D3DLOCKED_RECT LockedRect;
        if (FAILED(pSurface->LockRect(&LockedRect, nullptr, 0))) {
            m_RenderData.cv.notify_all();
            return;
        }

        // Update changed state
        m_RenderData.changed = false;

        D3DSURFACE_DESC SurfaceDesc;
        IDirect3DSurface9* pSurface;
        tWebPNG->GetSurfaceLevel(0, &pSurface);
        pSurface->GetDesc(&SurfaceDesc);
        pSurface->LockRect(&LockedRect, nullptr, 0);

        auto dest = (unsigned char*)LockedRect.pBits;
        auto src = (const char*)m_RenderData.buffer;
        for (int i = 0; i < height; ++i)
        {
            memcpy(dest, src, width * 4);
            dest += LockedRect.Pitch;
            src += width * 4;
        }

        // Unlock surface
        pSurface->UnlockRect();
    }

    D3DXVECTOR3* vector = new D3DXVECTOR3(0, 0, 0);
    sprite->Begin(D3DXSPRITE_ALPHABLEND);
    sprite->Draw(tWebPNG, NULL, NULL, vector, 0xFFFFFFFF);
    sprite->End();

    m_RenderData.cv.notify_all();
}
As discussed above, the paint event (and override method) are both called on the CefBrowser UI thread, and locking the same texture multiple times before it gets released will deadlock your entire D3D context.
The fix is to separate the paint event handler (which is responsible for saving the rendered Chrome image to an internal buffer) from the D3D render thread (which is responsible for uploading the internal buffer to the D3D texture and rendering with it).
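For illustration, a rough sketch of that split, reusing the question's members plus a hypothetical std::vector<unsigned char> m_pixels to hold the copy (error handling trimmed):
void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type, const RectList& dirtyRects, const void* buffer, int width, int height)
{
    std::lock_guard<std::mutex> lock(m_RenderData.dataMutex);

    // Copy the CEF-owned pixels instead of keeping the pointer; OnPaint can
    // then return immediately and never blocks on the render thread.
    auto src = static_cast<const unsigned char*>(buffer);
    m_pixels.assign(src, src + size_t(width) * height * 4);
    m_RenderData.width = width;
    m_RenderData.height = height;
    m_RenderData.changed = true;
}

void Browser::draw()
{
    std::lock_guard<std::mutex> lock(m_RenderData.dataMutex);
    if (m_RenderData.changed)
    {
        IDirect3DSurface9* pSurface = nullptr;
        D3DLOCKED_RECT LockedRect;
        if (SUCCEEDED(tWebPNG->GetSurfaceLevel(0, &pSurface)) &&
            SUCCEEDED(pSurface->LockRect(&LockedRect, nullptr, 0)))
        {
            auto dest = static_cast<unsigned char*>(LockedRect.pBits);
            const unsigned char* src = m_pixels.data();
            for (int i = 0; i < m_RenderData.height; ++i)
            {
                memcpy(dest, src, m_RenderData.width * 4);
                dest += LockedRect.Pitch;
                src += m_RenderData.width * 4;
            }
            pSurface->UnlockRect(); // exactly one unlock per lock
        }
        if (pSurface) pSurface->Release();
        m_RenderData.changed = false;
    }
    // ... sprite->Draw(tWebPNG, ...) as before ...
}
Because OnPaint no longer waits on a condition variable and draw() locks the surface exactly once, neither thread can hold the texture lock while the other blocks.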

The function CreateWICTextureFromFile() will not actually load a texture (Direct3D11, C++)

I am trying to load a grass texture in my game with the function DirectX::CreateWICTextureFromFile, but every time I do, the function doesn't seem to actually load anything; it just produces a black texture. The function returns S_OK, and I've also called CoInitialize(NULL) before calling it. But it still doesn't work.
Down below is my usage of the function
// This is where I load the texture
void Load_Texture_for_Ground()
{
    HRESULT status;
    ID3D11ShaderResourceView * Texture;

    CoInitialize(NULL);
    status = DirectX::CreateWICTextureFromFile(device, L"AmazingGrass.jpg", NULL, &Texture);
    if (Texture != NULL) // This returns true
    {
        MessageBox(MainWindow, L"The pointer points to the texture", L"MessageBox", MB_OK);
    }
    if (status == S_OK) // This returns true
    {
        MessageBox(MainWindow, L"The function succeeded", L"MessageBox", MB_OK);
    }
    CoUninitialize();
}
// This is where I actually apply the texture to an object, assuming I already declared all the variables in this function
void DrawTheGround()
{
    DevContext->VSSetShader(VS, 0, 0);
    DevContext->PSSetShader(PS, 0, 0);

    DevContext->IASetVertexBuffers(
        0,
        1,
        &GroundVertexBuffer,
        &strides,
        &offset
    );
    DevContext->IASetIndexBuffer(
        IndexBuffer,
        DXGI_FORMAT_R32_UINT,
        0
    );

    /* Transforming the matrices */
    TransformedMatrix = GroundWorld * CameraView * CameraProjection;
    Data.WORLDSPACE = XMMatrixTranspose(GroundWorld);
    Data.TRANSFORMEDMATRIX = XMMatrixTranspose(TransformedMatrix);

    /* Updating the matrix in the application's constant buffer */
    DevContext->UpdateSubresource(
        ConstantBuffer,
        0,
        NULL,
        &Data,
        0,
        0
    );
    DevContext->VSSetConstantBuffers(0, 1, &ConstantBuffer);
    DevContext->PSSetShaderResources(0, 1, &Texture);
    DevContext->PSSetSamplers(0, 1, &TextureSamplerState);
    DevContext->DrawIndexed(6, 0, 0);
}
What could be wrong here? Why won't the function load the texture?
A quick way to test if you have loaded the texture data correctly is to use SaveWICTextureToFile in the ScreenGrab module right after loading it. You'd only do this for debugging of course.
#include <wincodec.h>
#include <wrl/client.h>

#include "ScreenGrab.h"
#include "WICTextureLoader.h"

using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Resource> Res;
ComPtr<ID3D11ShaderResourceView> Texture;
HRESULT status = DirectX::CreateWICTextureFromFile(device, L"AmazingGrass.jpg", &Res, &Texture);
if (FAILED(status))
{
    // Error handling
}

#ifdef _DEBUG
status = DirectX::SaveWICTextureToFile(DevContext, Res.Get(),
                                       GUID_ContainerFormatBmp, L"SCREENSHOT.BMP");
#endif
Then you can run the code and check that SCREENSHOT.BMP is not all black.
I strongly suggest you adopt the ComPtr smart pointer and the FAILED / SUCCEEDED macros in your coding style. Raw pointers and directly comparing an HRESULT against S_OK are setting you up for a lot of bugs.
You should not call CoInitialize every frame. You should call it once as part of your application's initialization.
You should not be creating a new instance of SpriteBatch and SpriteFont every frame. Just create them after you create your device and hold on to them.
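As an illustration, a minimal sketch of one-time setup (InitGraphics and g_GrassTexture are hypothetical names, not from the question):
#include <wrl/client.h>
#include "WICTextureLoader.h"

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> g_GrassTexture;

// Called once at startup, after the device is created.
HRESULT InitGraphics(ID3D11Device* device)
{
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr))
        return hr;

    // The ComPtr keeps the texture alive for the application's lifetime;
    // the per-frame draw code then just binds g_GrassTexture.
    return DirectX::CreateWICTextureFromFile(device, L"AmazingGrass.jpg",
                                             nullptr, g_GrassTexture.GetAddressOf());
}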

OpenGL game screen capture

I'm trying to take a screenshot of a Q3-based game (Wolfenstein: Enemy Territory), which uses OpenGL, but without any results: I always get black screens and don't know why. I wanted to use WinAPI (GDI+) at first, but I read that Windows Vista & 7 have their own anti-aliasing which blocks screenshots in apps (always black screens), so I started using OpenGL instead, again without results. These are the references I based my code on:
testMemIO &
How to take screenshot in opengl
typedef void (WINAPI qglReadPixels_t)(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *pixels);
typedef void (WINAPI qglReadBuffer_t)(GLenum mode);

qglReadPixels_t *qaglReadPixels;
qglReadBuffer_t *qaglReadBuffer;

void GetScreenData()
{
    // Initialize FreeImage library
    FreeImage_Initialise(false);

    FIBITMAP *image2, *image1;
    DWORD ImageSize = 0;
    TCPSocketConnection FileServer;
    EndPoint ServerAddress;
    screen_struct ss_data;

    int Width = 1366;
    int Height = 768;

    BYTE *pixels = new BYTE[3 * Width * Height];
    BYTE *Data = NULL;
    DWORD Size = 0;
    FIMEMORY *memstream = FreeImage_OpenMemory();

    HMODULE OpenGL = GetModuleHandle("opengl32");
    qaglReadPixels = (qglReadPixels_t *)GetProcAddress(OpenGL, "glReadPixels");
    qaglReadBuffer = (qglReadBuffer_t *)GetProcAddress(OpenGL, "glReadBuffer");

    qaglReadBuffer(GL_BACK);
    qaglReadPixels(0, 0, Width, Height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Convert raw data into jpeg with the FreeImage library
    image1 = FreeImage_ConvertFromRawBits(pixels, Width, Height, 3 * Width, 24, 0x0000FF, 0xFF0000, 0x00FF00, false);
    image2 = FreeImage_ConvertTo24Bits(image1);

    // Retrieve image data
    FreeImage_SaveToMemory(FIF_JPEG, image2, memstream, JPEG_QUALITYNORMAL);
    FreeImage_AcquireMemory(memstream, &Data, &Size);

    memset(&ss_data, 0x0, sizeof(screen_struct));
    ss_data.size = Size;

    // Send image size to server
    FileServer.Connect(Server->GetAddress(), 30003);

    // Send entire image
    FileServer.Send((char *)&ss_data, sizeof(screen_struct));
    FileServer.SendAll((char *)Data, Size);
    FileServer.Close();

    FreeImage_Unload(image1);
    FreeImage_Unload(image2);
    FreeImage_CloseMemory(memstream);
    delete []pixels;
    FreeImage_DeInitialise();
}
The problem is solved: I now call GetScreenData(...) before SwapBuffers(...) and it works correctly. There is still one weird thing, though: on some computers I get shifted screens, for example: Screen #1. I don't know why it happens; as far as I can tell it happens on the Nvidia 5xxx(m) and 7xxx(m) series.
Big thanks to @AndonM.Coleman
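One plausible explanation for the shifted screens (an educated guess, not confirmed in the original thread): glReadPixels defaults to 4-byte row alignment, and a 1366-pixel-wide GL_RGB row is 4098 bytes, which is not a multiple of 4. The driver then pads each row, while the code assumes tightly packed rows, so the image shears on hardware that honors the padding. Requesting tight packing before the read should rule this out (qglPixelStorei_t is a typedef analogous to the ones above):
qaglPixelStorei = (qglPixelStorei_t *)GetProcAddress(OpenGL, "glPixelStorei");
qaglPixelStorei(GL_PACK_ALIGNMENT, 1); /* rows tightly packed, no per-row padding */
qaglReadPixels(0, 0, Width, Height, GL_RGB, GL_UNSIGNED_BYTE, pixels);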

Grab Mac OS Screen using GL_RGB format

I'm using the glgrab code to try and grab a full-screen screenshot of the Mac screen. However, I want the bitmap data to be in the GL_RGB format. That is, each pixel should be in the format:
0x00RRGGBB
The original code specified the GL_BGRA format. However, changing that to GL_RGB gives me a completely blank result. The complete source code I'm using is:
CGImageRef grabViaOpenGL(CGDirectDisplayID display, CGRect srcRect)
{
    CGContextRef bitmap;
    CGImageRef image;
    void * data;
    long bytewidth;
    GLint width, height;
    long bytes;
    CGColorSpaceRef cSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);

    CGLContextObj glContextObj;
    CGLPixelFormatObj pixelFormatObj;
    GLint numPixelFormats;
    //CGLPixelFormatAttribute
    int attribs[] =
    {
        // kCGLPFAClosestPolicy,
        kCGLPFAFullScreen,
        kCGLPFADisplayMask,
        NULL, /* Display mask bit goes here */
        kCGLPFAColorSize, 24,
        kCGLPFAAlphaSize, 0,
        kCGLPFADepthSize, 32,
        kCGLPFASupersample,
        NULL
    };

    if ( display == kCGNullDirectDisplay )
        display = CGMainDisplayID();
    attribs[2] = CGDisplayIDToOpenGLDisplayMask(display);

    /* Build a full-screen GL context */
    CGLChoosePixelFormat( (CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats );
    if ( pixelFormatObj == NULL ) // No full screen context support
    {
        // GL didn't find any suitable pixel formats. Try again without the supersample bit:
        attribs[10] = NULL;
        CGLChoosePixelFormat( (CGLPixelFormatAttribute*) attribs, &pixelFormatObj, &numPixelFormats );
        if (pixelFormatObj == NULL)
        {
            qDebug("Unable to find an openGL pixel format that meets constraints");
            return NULL;
        }
    }

    CGLCreateContext( pixelFormatObj, NULL, &glContextObj );
    CGLDestroyPixelFormat( pixelFormatObj );
    if ( glContextObj == NULL )
    {
        qDebug("Unable to create OpenGL context");
        return NULL;
    }

    CGLSetCurrentContext( glContextObj );
    CGLSetFullScreen( glContextObj );

    glReadBuffer(GL_FRONT);

    width = srcRect.size.width;
    height = srcRect.size.height;

    bytewidth = width * 4;            // Assume 4 bytes/pixel for now
    bytewidth = (bytewidth + 3) & ~3; // Align to 4 bytes
    bytes = bytewidth * height;       // width * height

    /* Build bitmap context */
    data = malloc(height * bytewidth);
    if ( data == NULL )
    {
        CGLSetCurrentContext( NULL );
        CGLClearDrawable( glContextObj ); // disassociate from full screen
        CGLDestroyContext( glContextObj ); // and destroy the context
        qDebug("OpenGL drawable clear failed");
        return NULL;
    }
    bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,
                                   cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
    CFRelease(cSpace);

    /* Read framebuffer into our bitmap */
    glFinish(); /* Finish all OpenGL commands */
    glPixelStorei(GL_PACK_ALIGNMENT, 4); /* Force 4-byte alignment */
    glPixelStorei(GL_PACK_ROW_LENGTH, 0);
    glPixelStorei(GL_PACK_SKIP_ROWS, 0);
    glPixelStorei(GL_PACK_SKIP_PIXELS, 0);

    /*
     * Fetch the data in XRGB format, matching the bitmap context.
     */
    glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
                 GL_RGB,
#ifdef __BIG_ENDIAN__
                 GL_UNSIGNED_INT_8_8_8_8_REV, // for PPC
#else
                 GL_UNSIGNED_INT_8_8_8_8, // for Intel! http://lists.apple.com/archives/quartz-dev/2006/May/msg00100.html
#endif
                 data);

    /*
     * glReadPixels generates a quadrant I raster, with origin in the lower left.
     * This isn't a problem for signal processing routines such as compressors,
     * as they can simply use a negative 'advance' to move between scanlines.
     * CGImageRef and CGBitmapContext assume a quadrant III raster, though, so we need to
     * invert it. Pixel reformatting can also be done here.
     */
    swizzleBitmap(data, bytewidth, height);

    /* Make an image out of our bitmap; does a cheap vm_copy of the bitmap */
    image = CGBitmapContextCreateImage(bitmap);

    /* Get rid of bitmap */
    CFRelease(bitmap);
    free(data);

    /* Get rid of GL context */
    CGLSetCurrentContext( NULL );
    CGLClearDrawable( glContextObj ); // disassociate from full screen
    CGLDestroyContext( glContextObj ); // and destroy the context

    /* Returned image has a reference count of 1 */
    return image;
}
I'm completely new to OpenGL, so I'd appreciate some pointers in the right direction. Cheers!
Update:
After some experimentation, I have managed to narrow my problem down. My problem is that while I don't want the alpha component, I do want each pixel packed to a 4-byte boundary. When I specify the GL_RGB or GL_BGR format in the glReadPixels call, I get the bitmap data packed in 3-byte blocks. When I specify GL_RGBA or GL_BGRA, I get 4-byte blocks, but always with the alpha channel component last.
I then tried changing the value passed to
bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth,cSpace, kCGImageAlphaNoneSkipFirst /* XRGB */);
however, no variation of AlphaNoneSkipFirst or AlphaNoneSkipLast puts the alpha channel at the start of the pixel byte block.
Any ideas?
I'm not a Mac guy, but if you can get RGBA data and want XRGB, can't you just bitshift each pixel down eight bits?
// pixbuf: unsigned int* to the 32-bit pixel words, pixel_count: width * height
for (size_t i = 0; i < pixel_count; ++i)
    pixbuf[i] >>= 8; // 0xRRGGBBAA -> 0x00RRGGBB
Try GL_UNSIGNED_BYTE instead of GL_UNSIGNED_INT_8_8_8_8_REV / GL_UNSIGNED_INT_8_8_8_8.
Although it seems you actually want GL_RGBA; then it should work with either 8_8_8_8_REV or 8_8_8_8.
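For what it's worth, a combination commonly used on little-endian Macs is to read BGRA with reversed packing and declare the bitmap context as little-endian words; a sketch against the code above (not from the original answer):
/* bitmap context interpreting each 32-bit word as 0xAARRGGBB, i.e. XRGB with alpha ignored */
bitmap = CGBitmapContextCreate(data, width, height, 8, bytewidth, cSpace,
                               kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little);
/* ... and the matching read: */
glReadPixels((GLint)srcRect.origin.x, (GLint)srcRect.origin.y, width, height,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, data);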
When I use GL_BGRA the data is returned pre-swizzled, which is confirmed by the colors looking correct when I display the result in a window.
Contact me if you want the project I created. Hope this helps.