What's the alternative to D3DXCreateTextureFromFileInMemory and D3DXCreateTextureFromFileEx in D3D11? Simply put, how can I load an image into a texture (it looks like the ID3D11Texture2D data type) so that I can render it?
That's kind of a broad question, but hopefully I can help point you in the right direction.
At the highest level, "loading" a texture involves the following steps:
Get the image data into memory in some form (load it from a file, generate it algorithmically, etc.).
Convert the image data into the raw form required by the texture. This depends on the texture format you need; for example, most color (albedo) textures will use the DXGI_FORMAT_R8G8B8A8_UNORM_SRGB format. This step may involve decompressing a source image file (e.g. if it's JPEG or PNG), and possibly some form of conversion if the source and destination formats use different data types, etc.
(Optional) generate the mip chain for the texture. Generally, having a full mip chain is a good idea for visual and performance reasons.
Copy the raw pixel data into the texture. Format conversion could be done during this step (it really depends on the implementation).
For the loading and conversion part, there are plenty of libraries that will load image files and convert them to raw pixel data. One such library is the Windows Imaging Component (WIC). There are others out there too; a Google search will yield lots of results.
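As an illustration of the WIC route, here is a rough sketch (error handling omitted, the file name is a placeholder, and it assumes COM has already been initialized with CoInitializeEx) that decodes an image file into 32-bit RGBA, which matches DXGI_FORMAT_R8G8B8A8_UNORM:
#include <windows.h>
#include <wincodec.h>   // WIC; link against windowscodecs.lib
#include <vector>

// Decode an image file into raw 32-bit RGBA pixels using WIC (sketch only).
IWICImagingFactory* factory = nullptr;
CoCreateInstance(CLSID_WICImagingFactory, nullptr, CLSCTX_INPROC_SERVER,
                 IID_PPV_ARGS(&factory));

IWICBitmapDecoder* decoder = nullptr;
factory->CreateDecoderFromFilename(L"albedo.png", nullptr, GENERIC_READ,
                                   WICDecodeMetadataCacheOnDemand, &decoder);

IWICBitmapFrameDecode* frame = nullptr;
decoder->GetFrame(0, &frame);

// Convert whatever the file contains (palettized, BGR, 16-bit, ...) to straight RGBA.
IWICFormatConverter* converter = nullptr;
factory->CreateFormatConverter(&converter);
converter->Initialize(frame, GUID_WICPixelFormat32bppRGBA,
                      WICBitmapDitherTypeNone, nullptr, 0.0,
                      WICBitmapPaletteTypeCustom);

UINT width = 0, height = 0;
converter->GetSize(&width, &height);

std::vector<BYTE> pixels(width * height * 4);
converter->CopyPixels(nullptr, width * 4,
                      static_cast<UINT>(pixels.size()), pixels.data());
pixels.data() can then serve as the pointer_to_raw_pixel_data used in the texture creation code further down (with SysMemPitch = width * 4).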
For mip generation, you can do this yourself, or some of the third-party imaging libraries will do it for you. D3DX can also generate mips. Another option is to have D3D generate them for you (not ideal, but it can work as a stop-gap) via the ID3D11DeviceContext::GenerateMips call.
To copy raw pixel data into the texture, assuming it's static ("immutable", i.e. unchanging) data, you should create your texture like so:
D3D11_TEXTURE2D_DESC tdesc;
// ...
// Fill out width, height, mip levels, format, etc...
// ...
tdesc.Usage = D3D11_USAGE_IMMUTABLE;          // Use D3D11_USAGE_DEFAULT instead if you go the auto-generate
                                              // mips route (an immutable texture can't be a render target).
tdesc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // Add D3D11_BIND_RENDER_TARGET if you want to go
                                              // with the auto-generate mips route.
tdesc.CPUAccessFlags = 0;
tdesc.MiscFlags = 0; // or D3D11_RESOURCE_MISC_GENERATE_MIPS for auto-mip gen.
D3D11_SUBRESOURCE_DATA srd; // (or an array of these if you have more than one mip level)
srd.pSysMem = pointer_to_raw_pixel_data; // This data should be in raw pixel format
srd.SysMemPitch = width_of_row_in_bytes; // Sometimes pixel rows may be padded so this might not be as simple as width * pixel_size_in_bytes.
srd.SysMemSlicePitch = 0;
ID3D11Texture2D * texture;
pDevice->CreateTexture2D(&tdesc, &srd, &texture);
This will create the texture and populate it with your pixel data in one go. If you'll be changing the texture contents occasionally, create it with D3D11_USAGE_DYNAMIC (and D3D11_CPU_ACCESS_WRITE) instead and update it with the ID3D11DeviceContext::Map/Unmap calls, or with D3D11_USAGE_DEFAULT and update it via ID3D11DeviceContext::UpdateSubresource.
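To actually render with the texture you also need a shader resource view bound to the pipeline. A rough sketch of that, plus the optional auto-mip-gen call and a later dynamic update (pContext is assumed to be your ID3D11DeviceContext*; height and the other placeholder names carry over from the snippet above):
// Create a view over the whole texture so shaders can sample it.
ID3D11ShaderResourceView* srv = nullptr;
pDevice->CreateShaderResourceView(texture, nullptr, &srv);

// Only valid if the texture was created with D3D11_USAGE_DEFAULT, D3D11_BIND_RENDER_TARGET
// and D3D11_RESOURCE_MISC_GENERATE_MIPS (the auto-mip-gen route mentioned above).
pContext->GenerateMips(srv);

// Bind to, e.g., pixel shader slot 0 before drawing.
pContext->PSSetShaderResources(0, 1, &srv);

// If the texture was created with D3D11_USAGE_DYNAMIC + D3D11_CPU_ACCESS_WRITE,
// it can be refilled later like this:
D3D11_MAPPED_SUBRESOURCE mapped;
if (SUCCEEDED(pContext->Map(texture, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
{
    for (UINT y = 0; y < height; ++y)                     // copy row by row; RowPitch may
        memcpy((BYTE*)mapped.pData + y * mapped.RowPitch, // be larger than the source pitch
               (const BYTE*)pointer_to_raw_pixel_data + y * width_of_row_in_bytes,
               width_of_row_in_bytes);
    pContext->Unmap(texture, 0);
}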
This is a rough overview of the basics; there's a ton of material out there going into the dirty details of how all this works, best practices, and so on. The best thing I can recommend is to find some sample code and experiment with it.
Goal: compensate and visualize a stream of 14-bit data (2D video).
Existing solution: Each sample needs to be compensated for a gain and an offset, so it requires one multiplication and one addition. Then I assign a colour to the sample via a look-up table and output the stream of "colours" directly to the display. Everything is done on the CPU.
Requirements: I need to be able to dynamically set a look-up table (palette).
It seems obvious to use the GPU for such an operation, but I couldn't find any info about how to move from the data domain to the picture domain with OpenGL. I've thought about using OpenCL for the data compensation and image generation and then moving to OpenGL for display (or, in general, for manipulating the picture).
Can you recommend a good approach for this? Can this all be achieved efficiently with just OpenGL? How?
Yes, it can be done using only OpenGL.
I would suggest a workflow like the following:
For each frame:
Upload frame from stream to texture memory
Draw a full-screen quad, with texture coordinates from 0,0 to 1,1
In a fragment shader, apply the appropriate transformation to each pixel. The lookup table can also be stored in a texture, so you only have to perform a lookup at the appropriate location (a sketch follows below).
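For the last step, a sketch of what such a fragment shader could look like (the uniform names are my own invention; it assumes the samples were uploaded into a single-channel texture and the palette into a 256x1 RGBA texture):
// Fragment shader sketch (old-style GLSL, stored as a C++ string) that applies
// gain/offset compensation and then looks the result up in a palette texture.
const char* kCompensateFrag = R"(
    uniform sampler2D uData;     // the raw samples, one per texel
    uniform sampler2D uPalette;  // the lookup table, e.g. a 256x1 RGBA texture
    uniform float uGain;
    uniform float uOffset;
    varying vec2 vTexCoord;

    void main()
    {
        float value = texture2D(uData, vTexCoord).r;          // normalized sample
        float index = clamp(value * uGain + uOffset, 0.0, 1.0);
        gl_FragColor = texture2D(uPalette, vec2(index, 0.5)); // palette lookup
    }
)";
Changing the palette at runtime is then just a glTexSubImage2D call on the palette texture, which covers the dynamic look-up-table requirement.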
In general: this question is at the moment a little too broad to be answered in more detail. For example, a stream of 14-bit data could be a lot of things; for this answer I assumed you meant a (2D) video stream.
I've heard that you need power-of-two texture dimensions for textures to work in OpenGL. However, I've been able to load textures that are 200x200 and 300x300 (not powers of 2), while a 512x512 texture (a power of two) loaded with the same code won't load its data (by the way, I am using DevIL to load these PNGs). I have not been able to find anything that tells me which dimensions will load. I also know that you can clip textures and add borders, but I don't know what the resulting dimensions should be.
Here is the load function:
void tex::load(std::string file)
{
    ILuint img_id = 0;
    ilGenImages(1, &img_id);
    ilBindImage(img_id);
    ilLoadImage(file.c_str());
    ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);
    pix_data   = (GLuint*)ilGetData();
    tex_width  = (GLuint)ilGetInteger(IL_IMAGE_WIDTH);
    tex_height = (GLuint)ilGetInteger(IL_IMAGE_HEIGHT);
    //create
    glGenTextures(1, &tex_id);
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tex_width, tex_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pix_data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindTexture(GL_TEXTURE_2D, 0);
    // Delete the DevIL image only after glTexImage2D has copied the data;
    // the pointer returned by ilGetData() is invalid once the image is deleted.
    ilDeleteImages(1, &img_id);
}
There are some sources that do say what the maximum is, or at least how to figure it out. Your first stop should be the OpenGL specification, but for that it would help to know which OpenGL version you are targeting. As far as I know, OpenGL hard-codes a minimum required maximum texture size of 64x64; the actual maximum is reported by the implementation through GL_MAX_TEXTURE_SIZE, which you can query with the glGet* functions. That tells you the largest power-of-two texture the implementation can handle.
On top of this, core OpenGL itself never mentions non-power-of-two textures, unless it is a core feature in newer OpenGL versions or exposed as an extension (GL_ARB_texture_non_power_of_two).
If you want to know which combinations are actually supported, again refer to the appropriate specification; it will tell you how to obtain that information.
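As a concrete illustration, the queries could look like this (a sketch for an older-style context; on a core profile you would enumerate extensions with glGetStringi instead):
#include <cstring>

// Largest texture dimension the implementation accepts (e.g. 4096, 8192, 16384...).
GLint maxSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);

// Non-power-of-two textures are core since OpenGL 2.0; older drivers expose an extension.
const GLubyte* extensions = glGetString(GL_EXTENSIONS);
bool npotSupported = extensions != 0 &&
    strstr((const char*)extensions, "GL_ARB_texture_non_power_of_two") != 0;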
I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left and I also intend to work in that space for my screen-space 2d vertices i.e. I'm using glm::ortho( 0, width, 0, height, -1, 1 ), not inverting 0 and height.
You probably guessed it: my texturing is vertically flipped, but I'm 100% sure that my UVs are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading only PNG files, so I don't know whether another file format would cause the same problem. Would it? (I can't test right now; I'm not at home.)
I really want to keep the screen coords in the "standard" OpenGL space... I know I could just invert the orthogonal projection to fix it but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at load time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (glTexImage3D) for sprite animations, meaning I would need to invert the texture tiles individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible 3rd option would be to use glPixelStore to specify the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question's pretty old, but it's one of the first results on google when trying to solve this problem, so I thought I'd offer an updated solution.
Some time after this question was originally asked, stb_image.h gained a function called stbi_set_flip_vertically_on_load; simply passing true to it makes stb_image output images the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version: for whatever reason, it is actively maintained on GitHub:
https://github.com/nothings/stb
It's also worth noting that in stb_image's current implementation the flip is done pixel by pixel, which isn't exactly performant. This may change at a later date, as it has already been flagged for optimisation. Edit: it appears they've switched to memcpy, which should be a good bit faster.
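For reference, a typical usage sketch (the file name is a placeholder; a GL context and the usual GL headers are assumed):
// stb_image is a single-header library: define STB_IMAGE_IMPLEMENTATION in exactly one .cpp file.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

stbi_set_flip_vertically_on_load(1);   // make row 0 the bottom row, as glTexImage2D expects

int width = 0, height = 0, channels = 0;
unsigned char* pixels = stbi_load("sprite.png", &width, &height, &channels, 4); // force RGBA
if (pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}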
OK, I will answer my own question... I went through the documentation for both libs (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded image pixel: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c
So the issue is the difference in pixel storage order between the image-loading lib and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the pixel data in the same layout for every format it loads.
So I decided I'll just flip the pixel data returned by stb_image in place in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
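For completeness, a sketch of what such an in-place flip can look like (the helper name is mine, not from the original OglTextureFactory; components is 4 for RGBA data):
#include <algorithm>

// Swap the pixel rows top-to-bottom so the first row becomes the bottom row.
static void flip_vertically(unsigned char* pixels, int width, int height, int components)
{
    const int rowSize = width * components;
    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char* top    = pixels + y * rowSize;
        unsigned char* bottom = pixels + (height - 1 - y) * rowSize;
        std::swap_ranges(top, top + rowSize, bottom);
    }
}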
Hope this helps someone else in the future.
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite assumptions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always applicable when loading images with any image library and passing them to OpenGL.
Either feed the tex coords with 1.0f - uv.y when populating the vertices, OR reverse it in the shader:
fcol = texture2D( tex, vec2(uv.x,1.-uv.y) );
Does anyone know of an efficient way to push 2vuy non-planar data onto a GPU in a way that doesn't require swizzling?
I am grabbing the raw 2vuy data from an H.264 video file and successfully loading it into a texture that I map to an OpenGL object. I notice that my code spends a fair amount of time in glgProcessPixelsWithProcessor. My glTexImage2D call looks like the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_YCBCR_422_APPLE,
             GL_UNSIGNED_SHORT_8_8_APPLE, data);
Apple says in its OpenGL guide that GL_YCBCR_422_APPLE provides "acceptable" performance (p. 103), but that:
Note: If your data needs only to be swizzled, glgProcessPixels performs the swizzling reasonably fast although not as fast as if the data didn't need swizzling. But non-native data formats are converted one byte at a time and incurs a performance cost that is best to avoid.
I assume that there is some kind of internal format conversion happening on the CPU. I noticed in another thread that glgProcessPixels is running a block method as well.
Is my path the most efficient? If not, what is?
Your code, as it stands right now, depends on Apple-specific extensions, so I can't tell exactly what's happening inside.
However, what I suggest is that you create three 2D textures, each with exactly one channel, where each texture receives one of the color planes; using independent textures makes supporting chroma subsampling (that 4:2:2) simpler.
In a shader you'd then perform the colorspace conversion. When writing down the math, I suggest you go through a device-independent connection space like XYZ, as this allows you to take the color profile of the output device into account; ICC profiles provide the conversion from XYZ color space coordinates to device color space (RGB) coordinates.
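The full XYZ/ICC route is beyond a quick sketch, but just to illustrate the three-single-channel-texture idea, a fragment shader doing a plain BT.601 YCbCr-to-RGB conversion (sampler names are mine; video-range footage would additionally need the usual offset/scale applied) might look like this:
// Sketch only: direct BT.601 conversion instead of the more rigorous XYZ/ICC approach.
const char* kYCbCrToRgbFrag = R"(
    uniform sampler2D uTexY;    // luma plane
    uniform sampler2D uTexCb;   // blue-difference chroma plane (half horizontal resolution)
    uniform sampler2D uTexCr;   // red-difference chroma plane (half horizontal resolution)
    varying vec2 vTexCoord;

    void main()
    {
        float y  = texture2D(uTexY,  vTexCoord).r;
        float cb = texture2D(uTexCb, vTexCoord).r - 0.5;
        float cr = texture2D(uTexCr, vTexCoord).r - 0.5;

        vec3 rgb = vec3(y + 1.402 * cr,
                        y - 0.344 * cb - 0.714 * cr,
                        y + 1.772 * cb);
        gl_FragColor = vec4(rgb, 1.0);
    }
)";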
I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera always has an aspect ratio of 4:3, and the screen is undefined.
I want to load a texture and use this texture as a mask, so tracking and displaying of the content only are done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all the steps to load the texture, but when I call GetDesc() the Width and Height fields of the D3DSURFACE_DESC struct are rounded up to the next power-of-2 size. I do not care that the actual memory used for the texture is optimized for the graphics card, but I did not find any way to get the dimensions of the original image file on the hard disk.
I have been searching (so far without success) for a way to load the image into the computer's RAM only (the graphics card is not required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but for the moment I am still trying to avoid including OpenCV.
thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
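For example, the full call could look roughly like this (the file name and most of the defaults are placeholders; this only works if the device supports non-power-of-two textures):
IDirect3DTexture9* m_Sprite = nullptr;
D3DXCreateTextureFromFileEx(
    pDevice,                 // your IDirect3DDevice9*
    L"mask.png",             // placeholder file name
    D3DX_DEFAULT_NONPOW2,    // Width:  keep the file's width
    D3DX_DEFAULT_NONPOW2,    // Height: keep the file's height
    1,                       // MipLevels
    0,                       // Usage
    D3DFMT_UNKNOWN,          // take the format from the file
    D3DPOOL_MANAGED,         // Pool
    D3DX_DEFAULT,            // Filter
    D3DX_DEFAULT,            // MipFilter
    0,                       // ColorKey (0 = disabled)
    nullptr,                 // pSrcInfo
    nullptr,                 // pPalette
    &m_Sprite);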
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
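A minimal sketch of that (the file name is a placeholder); it only reads the file header and creates nothing on the GPU:
D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile(L"mask.png", &info)))
{
    UINT originalWidth  = info.Width;   // dimensions as stored in the file,
    UINT originalHeight = info.Height;  // not rounded up to a power of two
}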