DirectX9 Texture of arbitrary size (non 2^n) - c++

I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera always has an aspect ratio of 4:3, while the screen's aspect ratio is undefined.
I want to load a texture and use it as a mask, so that tracking and display of the content only happen within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct hold the next larger power-of-two size. I don't mind that the actual memory used for the texture is optimized for the graphics card, but I found no way to get the dimensions of the original image file on the hard disk.
I have also searched (so far without success) for a way to load the image into system RAM only (the graphics card is not needed) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but for the moment I'm still trying to avoid pulling in OpenCV.
thanks for your hints,
Norbert

Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
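A minimal sketch of such a call, assuming a device pointer named pDevice and an example file name (error handling trimmed):

// Sketch only: pDevice and the file name are assumptions on my part.
LPDIRECT3DTEXTURE9 pMaskTex = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice, TEXT("mask.png"),
    D3DX_DEFAULT_NONPOW2,   // Width: keep the file's width
    D3DX_DEFAULT_NONPOW2,   // Height: keep the file's height
    1,                      // MipLevels
    0,                      // Usage
    D3DFMT_UNKNOWN,         // Format: take it from the file
    D3DPOOL_MANAGED,
    D3DX_DEFAULT,           // Filter
    D3DX_DEFAULT,           // MipFilter
    0,                      // ColorKey (disabled)
    NULL, NULL,
    &pMaskTex);

D3DSURFACE_DESC Desc;
if (SUCCEEDED(hr))
    pMaskTex->GetLevelDesc(0, &Desc);   // Desc.Width / Desc.Height == file size

Keep in mind that the non-pow2 size is only kept if the device actually supports non-power-of-two textures (check the D3DPTEXTURECAPS_POW2 / D3DPTEXTURECAPS_NONPOW2CONDITIONAL caps); otherwise it will still be rounded up.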

D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
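A small sketch of how that would look (the file name is just an example):

D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile(TEXT("mask.png"), &info)))
{
    UINT imageWidth  = info.Width;   // dimensions as stored on disk,
    UINT imageHeight = info.Height;  // independent of any texture created later
}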

Related

Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?

I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left and I also intend to work in that space for my screen-space 2d vertices i.e. I'm using glm::ortho( 0, width, 0, height, -1, 1 ), not inverting 0 and height.
You probably guessed it: my texturing is vertically flipped, but I'm 100% sure that my UVs are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading PNG files only, so I don't know whether another file format would cause the same problem. Would it? (I can't test right now, I'm not at home.)
I really want to keep the screen coordinates in the "standard" OpenGL space... I know I could just invert the orthographic projection to fix it, but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at load time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (glTexImage3D) for sprite animations I would need to invert each texture tile individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible 3rd option would be to use glPixelStore* to describe the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question is pretty old, but it's one of the first results on Google when trying to solve this problem, so I thought I'd offer an updated solution.
Sometime after this question was originally asked, stb_image.h added a function called "stbi_set_flip_vertically_on_load". Simply passing true to it makes stb_image output images the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version: it is actively maintained on GitHub:
https://github.com/nothings/stb
It's also worth noting that stb_image's current implementation flips the image pixel by pixel, which isn't exactly performant. This may change at a later date, as it has already been flagged for optimisation. Edit: it appears they've switched to memcpy, which should be a good bit faster.
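For reference, a minimal load path using that flag (the file name and the choice of forced RGBA are my own assumptions):

#include "stb_image.h"

// Ask stb_image to return rows bottom-up, which is what glTexImage2D expects.
stbi_set_flip_vertically_on_load(1);

int w = 0, h = 0, channels = 0;
unsigned char* pixels = stbi_load("sprite.png", &w, &h, &channels, 4); // force RGBA
if (pixels)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    stbi_image_free(pixels);
}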
OK, I will answer my own question... I went through the documentation for both libs (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded image pixels: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c
So, the issue is related to the difference in pixel storage between the image loading lib and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the same data layout for every format it loads.
So I decided I'll just swap in place the pixel data returned by stb_image in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
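For anyone curious, a sketch of that in-place row swap (function and variable names are mine, not part of stb_image):

#include <cstring>
#include <vector>

// Flip an 8-bit-per-channel image vertically, in place.
// 'pixels' is the buffer returned by stbi_load, 'channels' its component count.
void FlipImageVertically(unsigned char* pixels, int width, int height, int channels)
{
    const int rowSize = width * channels;
    std::vector<unsigned char> tmp(rowSize);
    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char* rowTop    = pixels + y * rowSize;
        unsigned char* rowBottom = pixels + (height - 1 - y) * rowSize;
        std::memcpy(tmp.data(), rowTop,     rowSize);
        std::memcpy(rowTop,     rowBottom,  rowSize);
        std::memcpy(rowBottom,  tmp.data(), rowSize);
    }
}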
Hope this helps someone else in the future.
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite conventions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always applicable when loading images with any image library and passing them to OpenGL.
Either feed the tex-coords with 1.0f - uv.y when populating the vertices, OR reverse them in the shader:
fcol = texture2D( tex, vec2(uv.x,1.-uv.y) );

Real time drawing in GDI

I'm currently writing a 3D renderer (for fun and research), so I need a way to draw my framebuffer to a window. Since I'm doing all of my calculations on CPU, the drawing needs to be as fast as possible.
One of my goals is to use no existing graphics library (OpenGL/DirectX) so the drawing to the screen is pure Win32. In my research I've found a couple of ways to create and draw bitmaps and now I'm looking for the best one.
My current implementation uses a bitmap created with CreateDIBSection(), which is drawn to my window DC using BitBlt().
CreateDIBSection() gives me a pointer to the bitmap bytes so I can manipulate them without copying. Using this method I achieve an update rate of about 260 FPS (without any rendering done).
This seems a bit slow, so I'm looking for optimizations.
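For context, roughly what that setup looks like (hwnd, width and height are assumed; error handling omitted, and in real code the DIB section would be created once rather than per frame):

#include <windows.h>

// Sketch: create a 32-bpp top-down DIB section and blit it to the window.
void PresentFrame(HWND hwnd, int width, int height)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative height = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void*   pixels    = NULL;
    HDC     hdcWindow = GetDC(hwnd);
    HDC     hdcMem    = CreateCompatibleDC(hdcWindow);
    HBITMAP hDib      = CreateDIBSection(hdcWindow, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
    HGDIOBJ hOld      = SelectObject(hdcMem, hDib);

    // ... the software renderer writes BGRA pixels into 'pixels' here ...

    BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);   // present

    SelectObject(hdcMem, hOld);
    DeleteObject(hDib);
    DeleteDC(hdcMem);
    ReleaseDC(hwnd, hdcWindow);
}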
I've read that if you don't create a bitmap with the same palette as the system palette, slow color conversions are performed.
How can I make sure my DIB bitmap and window are compatible?
Are there methods of drawing a bitmap that are faster than my current implementation?
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
I've read that if you don't create a bitmap with the same palette as the system palette, slow color conversions are performed.
Very few systems run in a palette mode any more, so it seems unlikely this is an issue for you.
Aside from palettes, some GDI functions also cause a color matching conversion to be applied if the source bitmap and the destination have different gamuts. BitBlt, however, does not do this type of color matching, so you're not paying a price for that.
How can I make sure my DIB bitmap and window are compatible?
You don't. You can use DIBs (which are Device-Independent Bitmaps) or compatible (device-dependent) bitmaps. It's possible that your DIB bitmap matches the current mode of your device. For example, if you're using a 32 bpp DIB, and your display is in that same mode, then no conversion is necessary. If you want a bitmap that's guaranteed to be in the same mode as your device, then you can't use a DIB and all the nice properties it provides for predictable pixel layout and format.
Are there methods of drawing a bitmap that are faster than my current implementation?
The limitation is most likely in getting the data from system memory to graphics adapter memory. To get around that limitation, you need a faster graphics bus, or you need to render directly into graphic memory, which means you'd need to do your computation on the GPU rather than the CPU.
If you're rendering a 1920 x 1080 pixel image at 24 bits per pixel, that's close to 6 MB for your frame buffer. That's an awful lot of data. If you're doing that 260 times per second, that's roughly 1.6 GB of pixel data per second, which is actually pretty impressive.
I've also read something about DrawDibDraw(); can anyone confirm that this is faster?
It's conceivable, but the only way to know would be to measure it. And the results might vary from machine to machine because of differences in the graphics adapter (and which bus they use).

Trying to use OpenGL Texture Compression on a large bitmap - get white squares

I'm trying to use OpenGL's texture compression on a large image. My image is a world map that I'm painting on the screen as a series of 128x128 tiles as part of a learning exercise. I want the user to be able to pan and zoom around the image. It's a JPG that is rather large (20k by 10k pixels) and so I wanted each of my tiles (I tiled the image) to be compressed in order to lower the memory footprint of my program.
I picked an arbitrary texture compression format when I called glTexImage2D, and each of my tiles became a white square. I dug a little deeper and figured "maybe my video card doesn't support all these formats." The video card is an Nvidia NVS 3100M in an IBM ThinkPad laptop. I queried the supported texture compression formats (GL_COMPRESSED_TEXTURE_FORMATS), but the query didn't return anything. I also checked which GL_EXTENSIONS were supported and got back "GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture", which doesn't look like much.
My program is in C# using the SharpGL library.
What other things can I check to see to try to figure this one out?
How about checking your texture minification filter settings? The default GL_TEXTURE_MIN_FILTER expects mipmaps; if you never upload them, the texture is incomplete and the tiles render as if untextured (typically plain white).
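Two quick checks, sketched with the raw GL calls (SharpGL exposes the same entry points; the compressed-format enums are GL 1.3+, so they may be missing from the stock Windows GL 1.1 headers):

#include <vector>

// Sketch: run with the texture in question bound to GL_TEXTURE_2D.
void CheckCompressionAndFilters()
{
    // 1) Which compressed formats does the driver actually expose?
    GLint numFormats = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &numFormats);
    std::vector<GLint> formats(numFormats > 0 ? numFormats : 0);
    if (numFormats > 0)
        glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());

    // An extension string containing only GL_WIN_swap_hint / GL_EXT_bgra /
    // GL_EXT_paletted_texture suggests the Microsoft software GL 1.1 renderer,
    // i.e. no hardware-accelerated context and no texture compression at all.
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    const char* version  = (const char*)glGetString(GL_VERSION);

    // 2) White tiles are also the classic symptom of an incomplete texture:
    // the default minification filter wants mipmaps that were never uploaded.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}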

Fastest way of plotting a point on screen in MFC C++ app

I have an application that contains many millions of 3D RGB points that form an image when plotted. What is the fastest way of getting them to the screen in an MFC application? I've tried CDC::SetPixelV in conjunction with a bitmap, which seems quite slow, and am looking towards a Direct3D or OpenGL window in my MFC view class. Any other good places to look?
Double buffering is your solution. There are many examples on CodeProject; check this one, for example.
Sounds like a point cloud. You might find some good information searching on that term.
3D hardware is the fastest way to take 3D points and get them into a 2D display, so either Direct3D or OpenGL seem like the obvious choices.
If the number of points is much greater than the number of pixels in your display, then you'll probably first want to cull points that are trivially outside the view. You put all your points in some sort of spatial partitioning structure (like an octree) and omit the points inside any node that's completely outside the viewing frustum. This reduces the amount of data you have to push from system memory to GPU memory, which will likely be the bottleneck. (If your point cloud is static and you're just building a fly-through, and if your GPU has enough memory, you could skip the culling, send all the data at once, and then just update the transforms for each frame.)
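A rough sketch of the per-node visibility test (the plane and box representations are assumptions, not any particular library's API):

// Frustum plane in the form a*x + b*y + c*z + d = 0, with the normal
// pointing towards the inside of the frustum.
struct Plane { float a, b, c, d; };
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

// Returns false only when the box is entirely outside one of the six planes,
// so an octree node that returns false can be skipped with all of its points.
bool BoxIntersectsFrustum(const AABB& box, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
    {
        const Plane& p = frustum[i];
        // Test the box corner farthest along the plane normal ("positive vertex").
        const float x = (p.a >= 0.0f) ? box.maxX : box.minX;
        const float y = (p.b >= 0.0f) ? box.maxY : box.minY;
        const float z = (p.c >= 0.0f) ? box.maxZ : box.minZ;
        if (p.a * x + p.b * y + p.c * z + p.d < 0.0f)
            return false;   // completely outside this plane
    }
    return true;            // inside or intersecting the frustum
}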
If you don't want to use the GPU and instead write a software renderer, you'll want to render to a bitmap that's in the same pixel format as your display (to eliminate the chance of the blit needing to do any pixel format conversion as it blasts the bitmap to the display). For reasonable window sizes, blitting at 30 frames per second is feasible, but it might not leave much time for the CPU to do the rendering.

C++ Image Manipulation

So I have made this program for a game and need help with making it a bit more automatic.
The program takes in an image and then displays it. I'm doing this via textures in OpenGL. When I take a screenshot of the game it is usually about 700x400. I input the height and width into my program and resize the image to 1024x1024 (making it a POT texture for better compatibility) by adding blank space: the original image stays in the top-left corner, covering (0,0) to (700,400), and the rest is just blank (does anyone know the term for this?). I then load it into my program and adjust the texture coordinates so only the part from (0,0) to (700,400) is shown.
That's how I handle displaying the image. Now I would like to make this automatic: I'd take a 700x400 picture and pass it to the program, which would read the image's width and height (700x400), resize it to 1024x1024 by adding blank space, and then load it.
So does anyone know a C++ library capable of doing this? I would still be taking the screenshot manually though.
I am using the Simple OpenGL Image Library (SOIL) for loading the picture (.bmp) and converting it into a texture.
Thanks!
You don't really have to resize by adding blank space to display the image properly. In fact, it's a really inefficient way to do it, especially because you store the images in .bmp format.
SOIL is able to handle the power-of-two sizing for you when loading textures - maybe just try to load the file as-is, without doing any extra operations (see the sketch after the documentation excerpt below).
From SOIL Documentation:
Can automatically rescale the image to the next largest power-of-two size
Can load rectangular textures for GUI elements or splash screens (requires GL_ARB/EXT/NV_texture_rectangle)
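A sketch of letting SOIL do that work (the file name and flag choice are assumptions; check SOIL.h for the full flag list):

#include <cstdio>
#include "SOIL.h"

// Let SOIL create the GL texture itself; SOIL_FLAG_POWER_OF_TWO asks it to
// take care of the power-of-two sizing (per the docs above it rescales the
// image rather than padding it with blank space).
GLuint tex = SOIL_load_OGL_texture(
    "screenshot.bmp",
    SOIL_LOAD_AUTO,
    SOIL_CREATE_NEW_ID,
    SOIL_FLAG_POWER_OF_TWO | SOIL_FLAG_MIPMAPS);

if (tex == 0)
    printf("SOIL error: %s\n", SOIL_last_result());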
Anyway, you don't have to use a texture to display pixels on the screen. I presume you aren't using shaders for rendering - if it all goes through the fixed pipeline, there's the glDrawPixels function, which is much simpler. Just remember to change your SOIL call to SOIL_load_image.
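Something along these lines (the file name and the 2D projection setup are assumptions on my part):

#include "SOIL.h"

int w = 0, h = 0, channels = 0;
unsigned char* img = SOIL_load_image("screenshot.bmp", &w, &h, &channels, SOIL_LOAD_RGBA);
if (img)
{
    // Assumes a 2D ortho projection where (0,0) is the bottom-left corner.
    // Note: SOIL/stb_image return rows top-down while glDrawPixels expects
    // bottom-up, so flip the rows first (see the stb_image discussion above)
    // or the image will appear upside down.
    glRasterPos2i(0, 0);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, img);
    SOIL_free_image_data(img);
}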