Creating a DirectDraw surface from scratch in C++

I'm trying to convert a 2D array to a DDS and save it to a file. The array is full of Color structs (each having red, green, blue, and alpha components). Once I get the array into the correct format, I'm sure saving it to a file won't be a problem.
I'm fine with either using a library for this (as long as its license allows use in a closed-source project and it works on both Linux and Windows) or doing it manually, if I can find a good resource explaining how.
If anyone can point me in the right direction, I'd really appreciate it.

In DirectDraw you can create a surface from data in memory by setting up certain fields in the DDSURFACEDESC structure and passing it to the CreateSurface method of the IDirectDraw interface.
First you need to tell DirectDraw which fields of the DDSURFACEDESC structure contain the correct information by setting the dwFlags field to the following set of flags: DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT | DDSD_LPSURFACE | DDSD_PITCH.
Note that this only works for system-memory surfaces, so you probably need to add the DDSCAPS_SYSTEMMEMORY flag to the ddsCaps.dwCaps field (if DirectDraw doesn't do it by default).
Then you specify the address of the beginning of your pixel data array in the lpSurface field. If your buffer is contiguous, just set lPitch to 0. Otherwise, set the correct pitch there (the distance in bytes between the beginnings of two subsequent scanlines).
Set the correct pixel format in ddpfPixelFormat field, with correct bit depth in dwRGBBitCount and RGB masks in dwRBitMask, dwGBitMask and dwBBitMask.
Then set lXPitch to the size of one pixel in bytes (3 for 24-bit RGB); it depends on the pixel format you use.
Then pass the filled structure into CreateSurface and see if it works.
When you create the surface this way, keep in mind that DirectDraw will not manage its data buffer itself, and won't free this memory when you call Release on your surface. You need to free this memory yourself once it's no longer used by the surface.
If you want this pixel data to be placed in video memory, on the other hand, you need to create an offscreen surface in the usual way and then lock it, copy your pixels into its own buffer in video memory (you'll find its address in the lpSurface field, and remember to take lPitch into account!), and then unlock it.
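Putting the system-memory path together, here is a minimal sketch, assuming a 32-bit XRGB pixel buffer; dd, pixels, width, and height are placeholder names, and error handling is elided:

#include <ddraw.h>

// Sketch: wrap an existing 32-bit XRGB buffer in a system-memory surface.
IDirectDrawSurface* CreateSurfaceFromMemory(IDirectDraw* dd, void* pixels,
                                            DWORD width, DWORD height)
{
    DDSURFACEDESC ddsd = {};
    ddsd.dwSize = sizeof(ddsd);
    ddsd.dwFlags = DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT |
                   DDSD_LPSURFACE | DDSD_PITCH | DDSD_CAPS;
    ddsd.dwWidth = width;
    ddsd.dwHeight = height;
    ddsd.lPitch = LONG(width) * 4;           // bytes per scanline
    ddsd.lpSurface = pixels;                 // your buffer; you keep ownership
    ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY;
    ddsd.ddpfPixelFormat.dwSize = sizeof(DDPIXELFORMAT);
    ddsd.ddpfPixelFormat.dwFlags = DDPF_RGB;
    ddsd.ddpfPixelFormat.dwRGBBitCount = 32;
    ddsd.ddpfPixelFormat.dwRBitMask = 0x00FF0000;
    ddsd.ddpfPixelFormat.dwGBitMask = 0x0000FF00;
    ddsd.ddpfPixelFormat.dwBBitMask = 0x000000FF;

    IDirectDrawSurface* surface = nullptr;
    if (FAILED(dd->CreateSurface(&ddsd, &surface, nullptr)))
        return nullptr;
    return surface;  // remember: 'pixels' must outlive the surface
}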

Related

OpenGL Reading Pixels from Texture?

I need a way to get the pixels of an already existing texture, similarly to how D3DTexture's LockRect works with ReadOnly and NoSysLock. Some of my textures are also stored in compressed DXT1/3/5 formats; I'm not entirely sure whether that would affect anything, or whether those formats are simply decoded by OpenGL and stored as raw pixels rather than kept compressed. So would retrieving the pixels guarantee the same format that was used to set the texture with?
Generally you will want to use a PBO for reading pixels; the linked PBO guide has all the information you need.
"So would retrieving the pixels guarantee the same format that was used to set the texture with?"
It is possible to convert the format and retrieve the pixels at the same time. Look at the format conversion section on the page I linked.
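As a rough illustration of the PBO route, here is a sketch, assuming a desktop GL context where glGetTexImage is available; the function name and the RGBA8 target format are my own illustrative choices:

#include <GL/glew.h>   // or any loader providing GL 2.1+ entry points
#include <cstring>
#include <vector>

// Sketch: read a texture back through a pixel-pack PBO, letting GL convert
// whatever the internal format is (including DXT) into plain RGBA8.
std::vector<unsigned char> ReadTexturePixels(GLuint tex, int width, int height)
{
    const size_t size = size_t(width) * height * 4;

    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER, size, nullptr, GL_STREAM_READ);

    glBindTexture(GL_TEXTURE_2D, tex);
    // With a PIXEL_PACK buffer bound, the last argument is an offset into it.
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    std::vector<unsigned char> pixels(size);
    if (void* mapped = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
        std::memcpy(pixels.data(), mapped, size);
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo);
    return pixels;
}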

Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?

I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left and I also intend to work in that space for my screen-space 2d vertices i.e. I'm using glm::ortho( 0, width, 0, height, -1, 1 ), not inverting 0 and height.
You probably guessed it: my texturing is vertically flipped, but I'm 100% sure that my UVs are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading PNG files only so I don't know if it would cause this problem if I was using another file format. Would it? (I can't test right now, I'm not at home).
I really want to keep the screen coords in the "standard" OpenGL space... I know I could just invert the orthogonal projection to fix it but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at load time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (glTexImage3D) for sprite animations, I would need to invert texture tiles individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible 3rd option would be to use glPixelStore to specify the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question's pretty old, but it's one of the first results on google when trying to solve this problem, so I thought I'd offer an updated solution.
Sometime after this question was originally asked, stb_image.h added a function called stbi_set_flip_vertically_on_load; simply passing true to it causes stb_image to output images the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version: it is actively maintained on GitHub:
https://github.com/nothings/stb
It's also worth noting that in stb_image's current implementation the flip is performed pixel-by-pixel, which isn't exactly performant. This may change at a later date, as it has already been flagged for optimisation. Edit: it appears they've switched to memcpy, which should be a good bit faster.
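For completeness, a minimal usage sketch; the file name and the 4-channel/GL_RGBA upload are illustrative assumptions (a GL context and headers are assumed to be set up already):

#include "stb_image.h"

// Sketch: flip at load time, then upload directly; rows now match OpenGL's
// bottom-up convention, so no manual flipping is needed.
stbi_set_flip_vertically_on_load(1);   // the flag is an int

int w, h, channels;
unsigned char* data = stbi_load("sprite.png", &w, &h, &channels, 4);
if (data) {
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, data);
    stbi_image_free(data);
}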
OK, I will answer my own question... I went through the documentation for both libs (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded image pixels: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c
So the issue is related to the pixel storage difference between the image loading lib and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the same data layout for all formats it loads.
So I decided I'll just swap the pixel data returned by stb_image in place, in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
Hope this helps someone else in the future.
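For reference, a sketch of the kind of in-place flip described above; the function name and signature are mine, not OglTextureFactory's actual code:

#include <algorithm>

// Sketch: swap scanline i with scanline (height - 1 - i), in place.
void FlipImageVertically(unsigned char* pixels, int width, int height,
                         int bytesPerPixel)     // 4 for RGBA
{
    const int stride = width * bytesPerPixel;
    for (int row = 0; row < height / 2; ++row) {
        unsigned char* top    = pixels + row * stride;
        unsigned char* bottom = pixels + (height - 1 - row) * stride;
        std::swap_ranges(top, top + stride, bottom);
    }
}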
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite assumptions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always applicable when loading images with any image library and passing them to OpenGL.
Either feed tex-coords with 1.0f - uv.y when populating vertices, OR reverse it in the shader:
fcol = texture2D( tex, vec2(uv.x,1.-uv.y) );

Read framebuffer texture like a 1D array

I am doing some GPGPU calculations with GL and want to read my results from the framebuffer.
My framebuffer texture is logically a 1D array, but I made it 2D to have a bigger area. Now I want to read a run of pixels of any given length, starting at any arbitrary position in the framebuffer texture.
That means all calculations are already done on the GPU side, and I only need to pass certain data to the CPU; that data may wrap across the rows of the texture.
Is this possible? If yes, is it slower or faster than calling glReadPixels on the whole image and then cutting out what I need?
EDIT
Of course I know about OpenCL/CUDA, but they are not desired because I want my program to run out of the box on (almost) any platform.
I also know that glReadPixels is very slow, and one reason might be that it offers functionality I do not need (operating in 2D). That is why I asked for a more basic function that might be faster.
Reading the whole framebuffer with glReadPixels just to discard all of it except a few pixels/lines would be grossly inefficient. But glReadPixels lets you specify a rect within the framebuffer, so why not just restrict it to fetching the few rows of interest? You may end up fetching some extra data at the start and end of the first and last lines fetched, but I suspect the overhead of that is minimal compared with making multiple calls.
Possibly writing your data to the framebuffer in tiles and/or using Morton order might help structure it so that a tighter bounding box can be found and the extra data retrieved is minimised.
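To make the row-fetching idea concrete, here is a sketch, assuming an RGBA float framebuffer and treating row 0 as the start of the logical 1D array; the names and types are illustrative:

#include <vector>

// Sketch: map the 1D range [offset, offset + length) onto texture rows and
// fetch only those rows, then trim the excess at both ends.
std::vector<float> ReadRange(int offset, int length, int width)
{
    const int firstRow = offset / width;
    const int lastRow  = (offset + length - 1) / width;
    const int rows     = lastRow - firstRow + 1;

    std::vector<float> buf(size_t(width) * rows * 4);   // RGBA per pixel
    glReadPixels(0, firstRow, width, rows, GL_RGBA, GL_FLOAT, buf.data());

    const int skip = offset - firstRow * width;         // unwanted leading pixels
    return std::vector<float>(buf.begin() + size_t(skip) * 4,
                              buf.begin() + size_t(skip + length) * 4);
}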
You can use a pixel buffer object (PBO) to transfer pixel data from the framebuffer to the PBO, then use glMapBufferARB to read the data directly:
http://www.songho.ca/opengl/gl_pbo.html
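The pattern from that article, reduced to a sketch (width and height are placeholders; a real implementation would keep the PBO around and may alternate between two of them to overlap transfer with processing):

// Sketch: read the framebuffer into a pixel-pack PBO, then map it.
GLuint pbo = 0;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);  // writes into the PBO

if (const void* data = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)) {
    // ... consume 'data' here ...
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glDeleteBuffers(1, &pbo);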

BMP image generated but displayed inverted

I have generated bitmap.dll through the WinDDK.
I added it manually as a printer driver, selecting the print-to-file driver.
Using this, I create an image of my document via the Print command.
I am able to create the image and view it, but the problem is that I get an inverted (mirrored) image.
cScans = pOemPDEV->bmInfoHeader.biHeight;
// Flip the biHeight member so that it denotes top-down bitmap
pOemPDEV->bmInfoHeader.biHeight = cScans * -1;
Does anyone have a workaround for this code? I get the problem when I comment out these lines (which I need to do to get the header generated properly).
Device-independent bitmaps are documented as being laid out in memory with the bottom line at the start of the buffer. It's an experiment in Cartesian coordinates perpetrated by the designers of OS/2, who were working with Microsoft at the same time Windows 3 was being developed.
There are two possible fixes:
Generate your buffer upside down.
Pass a negative biHeight: many Windows APIs that take a BITMAPINFO treat a negative biHeight value as meaning a top-down DIB.
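A sketch of the second fix, assuming 32-bit pixels; width and height are placeholders:

// Sketch: a negative biHeight marks the DIB as top-down.
BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = width;
bmi.bmiHeader.biHeight      = -height;   // negative => rows run top to bottom
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;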

Draw array of bits (RGB) in Windows

I have an array of raw RGB data.
I would like to know how I can draw these pixels on the screen in Windows.
Right now I use the API function DrawDIBits, but I have to flip my image data upside down first.
I always use SetDIBitsToDevice, but DrawDIBits could be okay as well (haven't checked).
As for the upside-down nature of the Windows blit functions:
There is a workaround. If you pass a BITMAPINFOHEADER or BITMAPINFO structure to the function, just negate the value in the bitmap-height member. This tells GDI to do the blit as if the height were positive, but to interpret the data as being stored in top-down order.
You may get a nice speed improvement from this "hack" as well.
If you want to shuffle the byte order of the pixels (e.g. turn ARGB into BGRA or so), you can use the BITMAPV4HEADER structure and tell GDI how your pixel data is organized. That functionality is rarely used but has worked since Win98; I'd say it's safe to use these days.
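A sketch combining both ideas, with illustrative masks for 32-bit ARGB data; hdc, pixels, width, and height are placeholders:

// Sketch: BITMAPV4HEADER with explicit channel masks, blitted top-down.
BITMAPV4HEADER hdr = {};
hdr.bV4Size          = sizeof(hdr);
hdr.bV4Width         = width;
hdr.bV4Height        = -height;         // negative => top-down data
hdr.bV4Planes        = 1;
hdr.bV4BitCount      = 32;
hdr.bV4V4Compression = BI_BITFIELDS;    // make GDI honor the masks below
hdr.bV4RedMask       = 0x00FF0000;
hdr.bV4GreenMask     = 0x0000FF00;
hdr.bV4BlueMask      = 0x000000FF;
hdr.bV4AlphaMask     = 0xFF000000;

SetDIBitsToDevice(hdc, 0, 0, width, height, 0, 0, 0, height,
                  pixels, reinterpret_cast<BITMAPINFO*>(&hdr),
                  DIB_RGB_COLORS);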
If you mean drawing it without reversing the (R,G,B) into (B,G,R), I don't know an automatic way to do that.
If you mean drawing it without padding each line to a multiple of 4 bytes, you can do it by drawing each line one at a time. It will be slow, though.