How to render a background image that is not a power of two - C++

I'm new to DirectX, and I am trying to render a texture with dimensions of 300 x 570 pixels.
What is the correct way to accomplish this?
I am using Windows 8 with feature level 11_0, and I have access to a function that loads textures from *.DDS files.
However, it is my understanding that DDS textures must have power-of-two dimensions; since my background texture does not, I cannot convert my background image (currently a .jpg) to the .dds format.
How might someone render this background texture efficiently?
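For what it's worth, the premise has a flaw: the DDS container does not require power-of-two dimensions (only block-compressed formats need multiples of 4), and feature level 11_0 has no power-of-two restriction at all, so a 300 x 570 texture is fine. A minimal sketch, assuming the DirectX Tool Kit (DirectXTK) is available and device is a valid ID3D11Device, that loads the .jpg directly without any DDS conversion:

    #include <d3d11.h>
    #include <wrl/client.h>
    #include "WICTextureLoader.h"  // DirectXTK

    Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> backgroundSRV;
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device,                        // ID3D11Device*, assumed to exist
        L"background.jpg",             // hypothetical file name
        nullptr,                       // raw ID3D11Resource not needed here
        backgroundSRV.GetAddressOf());
    if (FAILED(hr)) { /* handle the error */ }
    // Draw a full-screen quad (or use DirectXTK's SpriteBatch) that samples
    // backgroundSRV; no padding or resizing to a power of two is required.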

Related

Detect single channel texture in pixel shader

Is it possible to detect in HLSL or GLSL when a format has a single channel? Or, just as good, is it possible to extract a greyscale color from such a texture without knowing whether it has one channel or four?
When sampling from texture formats such as DXGI_FORMAT_R8_*/GL_R8 or DXGI_FORMAT_BC4_UNORM, I get pure red RGBA values (g, 0, 0, 1), where g is the grey value. This would not be a problem if I knew (within the shader) that the texture had only the single channel, as I could then flood the other channels with that red value. But anything of this nature would break the logic for color textures, requiring a separately compiled version for grey sampling (for every texture slot).
Is it not possible to make efficient use of grey textures in modern shaders without specializing the shader for them?
The only solution I can come up with at the moment is to detect the grey texture on the CPU side and set a macro on the GPU side that selects a different compiled version of the shader for every texture slot. With 8 texture slots, each either grey or color, that multiplies out to 2^8 = 256 compiled versions of every shader that wants to support grey inputs. That's not counting the other macro-like switches that actually make sense being there.
Just to be clear, I do know that I can load these textures into GPU memory as 4-channel greyscale textures and go from there. But doing that uses 4x the memory, which I would rather spend loading three more textures.
In OpenGL there are two ways to achieve what you're looking for:
Legacy: the GL_INTENSITY and GL_LUMINANCE texture formats, when sampled, yield vec4(I,I,I,I) and vec4(L,L,L,1) respectively.
Modern: use a swizzle mask to apply user-defined channel swizzling per texture, via glTexParameteriv with GL_TEXTURE_SWIZZLE_RGBA.
In DirectX 12 you can use component mapping when creating a shader resource view. Both are shown in the sketch below.
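A minimal sketch of both approaches, assuming tex is a bound single-channel OpenGL texture, and texture and srvHandle already exist on the Direct3D 12 side:

    // OpenGL 3.3+ (or ARB_texture_swizzle): replicate red into RGB, force A = 1.
    GLint swizzle[4] = { GL_RED, GL_RED, GL_RED, GL_ONE };
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzle);

    // Direct3D 12: the equivalent component mapping, baked into the SRV.
    D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_R8_UNORM;
    srvDesc.ViewDimension = D3D12_SRV_DIMENSION_TEXTURE2D;
    srvDesc.Texture2D.MipLevels = 1;
    srvDesc.Shader4ComponentMapping = D3D12_ENCODE_SHADER_4_COMPONENT_MAPPING(
        D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_0,  // R <- red
        D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_0,  // G <- red
        D3D12_SHADER_COMPONENT_MAPPING_FROM_MEMORY_COMPONENT_0,  // B <- red
        D3D12_SHADER_COMPONENT_MAPPING_FORCE_VALUE_1);           // A <- 1.0
    device->CreateShaderResourceView(texture, &srvDesc, srvHandle);

Either way the shader sees (g,g,g,1) from an ordinary sample call, so no specialized shader variants are needed.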

OpenGL textures, SDL_TTF fonts, and power-of-two dimensions

With my current project I've begun to translate all rendering from SDL to OpenGL. This means I have to convert an SDL_Surface (a loaded image) into an OpenGL texture.
When I do this, I understand it's important for the dimensions to be powers of two. But when I render a font I can't always have that. A tutorial describing how to use SDL_TTF with OpenGL made sure to rescale the image to the right dimensions when it wasn't, but this only distorts my image.
If I don't mess with the dimensions, everything works fine. Why do I need power-of-two dimensions? And if I really do, how do I apply them without distorting my image?
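The usual fix is to pad the surface up to the next power of two and adjust the texture coordinates, rather than stretching the image (the same trick as in the elevation answer further down). A minimal sketch, assuming SDL 1.2-era calls and that font and color already exist:

    // Hypothetical helper: round up to the next power of two.
    int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    SDL_Surface* text = TTF_RenderText_Blended(font, "Hello", color);
    int texW = nextPowerOfTwo(text->w);
    int texH = nextPowerOfTwo(text->h);

    // Blit the glyphs into the corner of a larger power-of-two surface.
    SDL_Surface* padded = SDL_CreateRGBSurface(SDL_SWSURFACE, texW, texH, 32,
        0x000000ff, 0x0000ff00, 0x00ff0000, 0xff000000);  // RGBA, little-endian
    SDL_SetAlpha(text, 0, 255);  // copy the alpha channel instead of blending
    SDL_BlitSurface(text, nullptr, padded, nullptr);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, padded->pixels);

    // When drawing, only address the region the text actually occupies.
    float maxU = (float)text->w / texW;
    float maxV = (float)text->h / texH;

Passing maxU and maxV to glTexCoord2f in place of 1.0 means the padding never shows and the glyphs keep their true size.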

Pixel manipulation in OpenGL

Let's say I have an image containing an object (a cube). That object is being tracked (with labels), and I manage to render a virtual cube onto it (augmented reality). Now that I can render a virtual cube onto it, I want to make the object 'disappear' with a really basic diminished-reality technique called inpainting. The inpainting in question is pretty simple (it has to be, or the FPS will suffer) and requires me to do some operations on pixels and their neighbors (as with Gaussian blur or other basic image processing).
To do that I first need:
A mask: black background with a white cube in it.
Access to each pixel of the initial image (at coordinates x and y), as well as its neighborhood, so I can do things based on the pixel value of the mask at the same x and y coordinates. Basically, the mask serves as a way to say 'ignore this pixel' or 'use this pixel'.
How do I do this using OpenGL? I want to be able to access pixel values one by one, preferably in 2D because of the neighbors.
Do I use FBOs or PBOs? I've read many things about buffers and functions like glDrawPixels(), but I'm having trouble putting them all together. The paper I saw this method in used the GL_BACK buffer, but mine is already in use. Some sample code (C++) with all the formalities (OpenGL calls) would be really appreciated, since I'm still a beginner in OpenGL.
I'm even thinking of using OpenCV if pixel manipulation is too hard in OpenGL, since my AR library (Aruco) works on top of OpenCV. In that case I would still need to get the mask (white cube on black background), convert it to a cv::Mat, and then do my processing.
I know this approach is inefficient (going back and forth between the GPU and CPU), but my goal (for now) is at least to make the basics work.
Set up a framebuffer object and render your original image plus the virtual cube into it. Here's a tutorial.
Next, attach that framebuffer texture as an input (sampler) texture of your next stage and render a quad (two triangles) that covers your mask.
In the fragment shader you can read the current screen position from the built-in variable gl_FragCoord. With the texture filters set to GL_NEAREST you can address exact texels, and the neighboring pixels are available at an offset of one texel (deltaX = 1/width, deltaY = 1/height).
Using the previous framebuffer texture as the source is mandatory, because the currently active framebuffer is write-only.
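Putting those steps together, a minimal fragment-shader sketch (GLSL embedded as a C++ string), assuming the scene and the mask were rendered into textures bound to the samplers scene and mask:

    const char* inpaintFrag = R"(
    #version 330 core
    uniform sampler2D scene;  // previous FBO color attachment
    uniform sampler2D mask;   // white where the cube is, black elsewhere
    uniform vec2 texelSize;   // (1.0/width, 1.0/height)
    out vec4 fragColor;

    void main() {
        vec2 uv = gl_FragCoord.xy * texelSize;  // screen pixel -> texture coords
        if (texture(mask, uv).r > 0.5) {
            // Masked pixel: average the four direct neighbors as a crude inpaint.
            vec4 sum = texture(scene, uv + vec2( texelSize.x, 0.0))
                     + texture(scene, uv + vec2(-texelSize.x, 0.0))
                     + texture(scene, uv + vec2(0.0,  texelSize.y))
                     + texture(scene, uv + vec2(0.0, -texelSize.y));
            fragColor = sum * 0.25;
        } else {
            fragColor = texture(scene, uv);  // unmasked pixels pass through
        }
    }
    )";

A single 4-neighbor average will not fill a whole cube in one go; running this as a few ping-pong passes between two FBOs gets closer to the cheap inpainting described above.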

Difference between glBitmap and glTexImage2D

I need to display an image in an OpenGL window.
The image changes on every timer tick.
I've checked on Google how to do it, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap"; it refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, the current raster color is written to the framebuffer. If the bit is 0, the pixel in the framebuffer is left untouched.
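For illustration, a tiny hedged sketch (the pattern bytes are made up): an 8x8 checkerboard drawn with glBitmap.

    // 8x8 checkerboard, one bit per pixel, rows packed MSB-first.
    const GLubyte pattern[8] = { 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55 };

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // rows are byte-aligned
    glColor3f(1.0f, 0.0f, 0.0f);            // 1-bits use the current raster color
    glRasterPos2i(10, 10);                  // where the bitmap lands
    glBitmap(8, 8, 0.0f, 0.0f, 0.0f, 0.0f, pattern);  // 0-bits leave pixels alone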
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image (both paths are sketched below).
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
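A hedged sketch of both paths, assuming pixels points to a W x H RGBA image refreshed every timer tick, tex is an already-allocated texture, and a matching orthographic projection is set up:

    // Path 1: glDrawPixels writes the image straight into the framebuffer
    // (legacy OpenGL; removed from core profiles).
    glRasterPos2i(0, 0);
    glDrawPixels(W, H, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Path 2: update the existing texture, then draw a textured quad.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // cheaper than re-creating
    glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

Note glTexSubImage2D rather than glTexImage2D for the per-tick update; reallocating the texture every frame is wasteful when only its contents change.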

OpenGL textured image is blurry

I am trying to create a mipmapped textured image that represents elevation. The image must be 940 x 618. I realize that my texture's width and height must be powers of two. So far I have tried incrementally doing all my texturing in squares (e.g. 64 x 64, 128 x 128, even 512 x 512), but the image still comes out blurry. Any idea how to better texture an image of this size?
Use a 1024x1024 texture and put your image in just a part of it, 940x618. Then use the values 940.0/1024.0 and 618.0/1024.0 as the max texture coordinates, or scale the texture matrix. This gives a 1:1 mapping for your pixels. You might also need to shift the model half a pixel to get a perfect fit; that depends on your model setup and view.
This is the technique I used in a screensaver I made for the Mac (http://memention.com/void/). It grabs the screen contents and uses them as a texture on some 3D effects, and I really wanted a pixel-perfect fit.
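A minimal sketch of that approach, assuming image points to the 940 x 618 RGBA pixel data and tex is a generated texture object:

    const int imgW = 940, imgH = 618;
    const int texW = 1024, texH = 1024;

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // Allocate the power-of-two texture, then upload the image into its corner.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, imgW, imgH,
                    GL_RGBA, GL_UNSIGNED_BYTE, image);

    // Texture coordinates that address only the used 940x618 region.
    float maxU = (float)imgW / texW;  // 940.0/1024.0
    float maxV = (float)imgH / texH;  // 618.0/1024.0
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(maxU, 0.0f); glVertex2f((float)imgW, 0.0f);
        glTexCoord2f(maxU, maxV); glVertex2f((float)imgW, (float)imgH);
        glTexCoord2f(0.0f, maxV); glVertex2f(0.0f, (float)imgH);
    glEnd();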
As far as I know, modern hardware does not require power-of-two texture dimensions. Just know, however, that if this code runs on an older machine you'll have some problems. How old is your machine?
The texture is probably not mapped 1:1, and you have GL_LINEAR or GL_NEAREST filtering. Try a higher-resolution texture, mipmapping, and a 1:1 screen mapping.
Use a 940x618 texture (if that is truly the size of the surface it's applied to) and set the texture's minification/magnification filters to GL_LINEAR. That should give you the results you're after.
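For completeness, a short sketch of that setup; note that a 940 x 618 texture needs OpenGL 2.0+ or the ARB_texture_non_power_of_two extension. Here tex and image are assumed to exist:

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 940, 618, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, image);  // non-power-of-two upload
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);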