How to blur a .png image using DirectX9 without advanced 3D features (HLSL / per-pixel motion blur / vertex shaders) - C++

Can anyone please help me blur a .png image using DirectX9 (the 2D graphics side of D3D) without using the advanced 3D features such as the High Level Shader Language (HLSL), Direct3D per-pixel motion blur, or vertex shaders?
Detailed explanation:
I have a row of 6 adjacent .png images (6 sprite textures) on a surface, and I continuously move/change the texture locations in a circular fashion (1->2->3->4->5->6->1->2->3->4->5->6) with a varying alpha component to get the feel of a spinning effect (2-dimensional spinning).
Problem:
Even with varying sprite texture alpha values and varying frames per second (fps), I don't get a real feel of spinning, just a continuous image change. After a lot of web searching I found the hint that the spinning effect is achievable if I apply a blur (Gaussian blur / box blur) to the .png image on the fly.
However, due to a 3D support restriction on my target platform, I can't use the advanced 3D features of Direct3D; only its 2D features are available for achieving the spinning effect (a kind of motion blur).
Any help, suggestions, sample code, or pointers in the right direction for solving this problem would be warmly appreciated.
Sample Code:
void D3DGraphics::DrawSprite(LPDIRECT3DTEXTURE9* textures, ID3DXSprite* pSprite, D3DXVECTOR2 Trans_11, int ImgIndex) {
    D3DXMATRIX Matrix;
    pDevice->Clear(0, NULL, D3DCLEAR_STENCIL, D3DCOLOR_XRGB(0, 0, 0), 0.0f, 0);
    pSprite->Begin(D3DXSPRITE_ALPHABLEND);
    D3DXMatrixTransformation2D(&Matrix, NULL, 0.0f, NULL, NULL, 0.0f, &Trans_11);
    pSprite->SetTransform(&Matrix);
    // --> Need blur effect here, before the draw / while loading the .png file as a texture <--
    pSprite->Draw(textures[ImgIndex], NULL, NULL, NULL, 0xFFFFFFFF);
    pSprite->End();
}

If cycling through 6 images isn't enough to provide smooth animation, why not cycle through, say, 12 or 24? If you only have 6 "original artworks", it'd still be easier to generate the intermediate images up front using Photoshop/ImageMagick-type tools and just have your app load them than to try to do it in DirectX.

Here is a link to a C++ box blur implementation on a Chinese blog. Hope it helps.
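Since shaders are off the table, one workable route is to blur the pixels on the CPU, either on the decoded PNG before it becomes a texture or on a lockable/system-memory copy of the texture between LockRect and UnlockRect. Below is a minimal box-blur sketch, assuming tightly interleaved 32-bit A8R8G8B8 pixels and a row pitch in bytes; the function name and the radius parameter are only illustrative, and a real implementation would use a separable or sliding-window blur for speed.
#include <vector>

// Naive single-pass box blur over 32-bit (A8R8G8B8) pixel data, e.g. the bits
// obtained from IDirect3DTexture9::LockRect on a lockable texture, or the raw
// decoded PNG before it is uploaded. 'pitch' is the row stride in bytes.
void BoxBlurARGB(unsigned char* pixels, int width, int height, int pitch, int radius)
{
    // Work from an unmodified copy so already-blurred pixels don't feed back in.
    std::vector<unsigned char> src(pixels, pixels + pitch * height);

    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            int sum[4] = { 0, 0, 0, 0 };
            int count = 0;

            // Average the (2*radius+1)^2 neighbourhood, clamped at the image edges.
            for (int dy = -radius; dy <= radius; ++dy)
            {
                for (int dx = -radius; dx <= radius; ++dx)
                {
                    const int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= width || sy < 0 || sy >= height)
                        continue;
                    const unsigned char* p = &src[sy * pitch + sx * 4];
                    for (int c = 0; c < 4; ++c)
                        sum[c] += p[c];
                    ++count;
                }
            }

            unsigned char* out = &pixels[y * pitch + x * 4];
            for (int c = 0; c < 4; ++c)
                out[c] = static_cast<unsigned char>(sum[c] / count);
        }
    }
}
Increasing the radius with the wheel's speed gives a rough motion-blur feel, though a true motion blur would smear along the direction of movement rather than in a box.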

Related

Why are my textures not rendered in greater detail in my DirectX11 game?

I am trying to write a small 3D game in C++, using DirectX 11. This is absolutely the first time I have attempted to write a game using only a graphics API. I have been following the tutorials on the website Rastertek.com up to Tutorial 9 for ambient lighting.
After implementing movement and collisions for the player, I increased the size of my play area. This is when I noticed my issue: the textures I am using for the walls and floor of my play area are not being rendered the way I expected them to.
Wall from close up
Wall from far away
Maybe you can tell how the lines on the wall appear strangely broken up - I was expecting them to be rendered properly at larger distances (like they are close up).
The thing that seems most weird to me, though, is that the lines can be rendered from far away, but only while moving the camera around the scene and only on certain parts of the wall. Standing still breaks the texture again. I tried capturing this effect on video, but I had no success getting it to show up in the video I took with the GeForce Experience.
I tried playing around with a bunch of the settings that DirectX offers, like the rasterizer or the depth buffer descriptions, I tried to enable and disable VSync, Antialiasing and Multisampling, I tried using Anisotropic filtering instead of linear filtering... But none of it had any effect.
I do not know where to look and what to try next. Am I just going to have to accept that my textures will look terrible at any sort of distance?
You need to generate mip maps for the texture you load. Check DDSTextureLoader.h/.cpp and WICTextureLoader.h/.cpp from the DirectX Tool Kit (DirectXTK) on GitHub.
For example, to load the .dds image with mip maps, you would use:
HRESULT DirectX::CreateDDSTextureFromFileEx( ID3D11Device* d3dDevice,
ID3D11DeviceContext* d3dContext,
const wchar_t* fileName,
size_t maxsize,
D3D11_USAGE usage,
unsigned int bindFlags,
unsigned int cpuAccessFlags,
unsigned int miscFlags,
bool forceSRGB,
ID3D11Resource** texture,
ID3D11ShaderResourceView** textureView,
DDS_ALPHA_MODE* alphaMode )
Example of usage:
HRESULT hr = DirectX::CreateDDSTextureFromFileEx(device, context, path.c_str(), 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET, 0, D3D11_RESOURCE_MISC_GENERATE_MIPS, 0, reinterpret_cast<ID3D11Resource**>(&pTexture), &pSRV);
THROW_IF_FAILED(hr);
Note the flags D3D11_BIND_RENDER_TARGET and D3D11_RESOURCE_MISC_GENERATE_MIPS used.
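As a side note, the mip chain only helps if the sampler is allowed to use it. If you create your own sampler state (as the Rastertek tutorials do), something along these lines enables filtering across the whole mip range; the device/context variables, register slot, and anisotropy level here are placeholder assumptions.
// Sampler that actually makes use of the generated mip chain.
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_ANISOTROPIC;      // or D3D11_FILTER_MIN_MAG_MIP_LINEAR
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sd.MaxAnisotropy = 8;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
sd.MinLOD = 0.0f;
sd.MaxLOD = D3D11_FLOAT32_MAX;             // don't clamp away the smaller mips

ID3D11SamplerState* sampler = nullptr;
HRESULT hr = device->CreateSamplerState(&sd, &sampler);
if (SUCCEEDED(hr))
    context->PSSetSamplers(0, 1, &sampler);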

Should I vertically flip the lines of an image loaded with stb_image to use in OpenGL?

I'm working on an OpenGL-powered 2d engine.
I'm using stb_image to load image data so I can create OpenGL textures. I know that the UV origin for OpenGL is bottom-left and I also intend to work in that space for my screen-space 2d vertices i.e. I'm using glm::ortho( 0, width, 0, height, -1, 1 ), not inverting 0 and height.
You probably guessed it, my texturing is vertically flipped but I'm 100% sure that my UV are specified correctly.
So: is this caused by stbi_load's storage of pixel data? I'm currently loading PNG files only so I don't know if it would cause this problem if I was using another file format. Would it? (I can't test right now, I'm not at home).
I really want to keep the screen coords in the "standard" OpenGL space... I know I could just invert the orthogonal projection to fix it but I would really rather not.
I can see two sane options:
1- If this is caused by stbi_load's storage of pixel data, I could invert it at loading time. I'm a little worried about that for performance reasons, and because I'm using texture arrays (glTexImage3D) for sprite animations I would need to invert the texture tiles individually, which seems painful and not a general solution.
2- I could use a texture coordinate transformation to vertically flip the UVs on the GPU (in my GLSL shaders).
A possible 3rd option would be to use glPixelStorei to describe the input data... but I can't find a way to tell it that the incoming pixels are vertically flipped.
What are your recommendations for handling my problem? I figured I can't be the only one using stbi_load + OpenGL and having that problem.
Finally, my target platforms are PC, Android and iOS :)
EDIT: I answered my own question... see below.
I know this question's pretty old, but it's one of the first results on google when trying to solve this problem, so I thought I'd offer an updated solution.
Some time after this question was originally asked, stb_image.h added a function called "stbi_set_flip_vertically_on_load"; simply passing true to this function will cause it to output images the way OpenGL expects, removing the need for manual flipping or texture-coordinate flipping.
Also, for those who don't know where to get the latest version, for whatever reason, you can find it at github being actively worked on:
https://github.com/nothings/stb
It's also worth noting that, in stb_image's current implementation, they flip the image pixel-by-pixel, which isn't exactly performant. This may change at a later date as they've already flagged it for optimisation. Edit: It appears that they've switched to memcpy, which should be a good bit faster.
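For completeness, a minimal usage sketch (the file name and the forced RGBA channel count are arbitrary):
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

// Ask stb_image to return rows bottom-up, matching what glTexImage2D expects.
stbi_set_flip_vertically_on_load(1);

int w = 0, h = 0, channels = 0;
unsigned char* pixels = stbi_load("sprite.png", &w, &h, &channels, 4); // force RGBA
if (pixels)
{
    // glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}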
OK, I will answer my own question... I went through the documentation for both libs (stb_image and OpenGL).
Here are the appropriate bits with reference:
glTexImage2D says the following about the data pointer parameter: "The first element corresponds to the lower left corner of the texture image. Subsequent elements progress left-to-right through the remaining texels in the lowest row of the texture image, and then in successively higher rows of the texture image. The final element corresponds to the upper right corner of the texture image." From http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
The stb_image lib says this about the loaded image pixel: "The return value from an image loader is an 'unsigned char *' which points to the pixel data. The pixel data consists of *y scanlines of *x pixels, with each pixel consisting of N interleaved 8-bit components; the first pixel pointed to is top-left-most in the image." From http://nothings.org/stb_image.c‎
So, the issue is related to the pixel storage difference between the image loading lib and OpenGL. It wouldn't matter if I loaded file formats other than PNG, because stb_image returns the same pixel layout for every format it loads.
So I decided I'll just swap in place the pixel data returned by stb_image in my OglTextureFactory. This way, I keep my approach platform-independent. If load time becomes an issue down the road, I'll remove the flipping at load time and do something on the GPU instead.
Hope this helps someone else in the future.
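In case someone prefers the swap-in-place route described above, here is a rough sketch of the scanline flip; the function name is made up, and it assumes the tightly packed rows that stbi_load returns:
#include <algorithm>
#include <vector>

// Flip an image returned by stbi_load upside down, in place.
// 'comp' is the number of components per pixel (e.g. 4 for RGBA).
void FlipScanlinesInPlace(unsigned char* pixels, int width, int height, int comp)
{
    const int rowBytes = width * comp;
    std::vector<unsigned char> tmp(rowBytes);
    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char* top    = pixels + y * rowBytes;
        unsigned char* bottom = pixels + (height - 1 - y) * rowBytes;
        std::copy(top, top + rowBytes, tmp.data());
        std::copy(bottom, bottom + rowBytes, top);
        std::copy(tmp.data(), tmp.data() + rowBytes, bottom);
    }
}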
Yes, you should. This can be easily accomplished by simply calling this STBI function before loading the image:
stbi_set_flip_vertically_on_load(true);
Since this is a matter of opposite assumptions between image libraries in general and OpenGL, I'd say the best way is to manipulate the vertical UV coordinate. This takes minimal effort and is always relevant when loading images using any image library and passing them to OpenGL.
Either feed the texture coordinates with 1.0f - uv.y when populating the vertices, OR reverse it in the shader:
fcol = texture2D( tex, vec2(uv.x,1.-uv.y) );

Trying to use OpenGL Texture Compression on a large bitmap - get white squares

I'm trying to use OpenGL's texture compression on a large image. My image is a world map that I'm painting on the screen as a series of 128x128 tiles as part of a learning exercise. I want the user to be able to pan and zoom around the image. It's a JPG that is rather large (20k by 10k pixels) and so I wanted each of my tiles (I tiled the image) to be compressed in order to lower the memory footprint of my program.
I picked an arbitrary texture compression format when I called glTexImage2D and each of my tiles became a white square. I dug a little deeper into this and figured "maybe my video card doesn't support all these formats." The video card is an Nvidia NVS 3100M on an IBM ThinkPad laptop, and I did a glGetString to try to see what the supported texture compression formats were, but it didn't return anything (GL_COMPRESSED_TEXTURE_FORMATS). I also checked which GL_EXTENSIONS were supported and it returned "GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture", which doesn't look like much.
My program is in C# using the SharpGL library.
What other things can I check to see to try to figure this one out?
How about checking those texture minification filtering settings?
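To expand on that: with OpenGL's default minification filter (GL_NEAREST_MIPMAP_LINEAR), a texture that has no mip chain is "incomplete" and typically samples as plain white, which matches the white squares described. A rough sketch of the usual fixes, written as raw GL calls (SharpGL exposes the same names); 'tileTexture' is a placeholder:
// Option 1: don't require mipmaps at all.
glBindTexture(GL_TEXTURE_2D, tileTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Option 2: build a full mip chain after uploading the tile,
// e.g. glGenerateMipmap(GL_TEXTURE_2D) on GL 3.0+.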

Cocos2D: How to use Mask Image

I am using cocos2d for a game which uses sprite sheets for my character animations. I created these images using TexturePacker. Now I want to use the PVRTC 4 format to reduce memory consumption, for several reasons. But as the PVRTC Texture Compression Usage Guide suggests, I need to add an extra border of 4 pixels around each character to produce proper results. Even if I add the border, I will have to mask the image with an alpha image to remove the border at run time. I am using TexturePacker to create a sprite sheet in PVRTC4 format and have created a matching alpha mask image. I now have these 2 images in hand, both of the same width and height.
Now my question is, how can I mask my PVRTC texture with alpha image in Cocos2D?
It will be more helpful if the solution provided works with Batch Nodes!
Thanks in advance for any solutions!
Why don't you just make the border/padding area completely transparent?
I was having the same problem, and after reading Ray Wenderlich's page about masking, I made a little CCSprite subclass which lets you mask one image with another.
CCMaskedSprite

DirectX9 Texture of arbitrary size (non 2^n)

I'm relatively new to DirectX and have to work on an existing C++ DX9 application. The app does tracking on camera images and displays some DirectDraw (i.e. 2D) content. The camera has an aspect ratio of 4:3 (always) and the screen is undefined.
I want to load a texture and use this texture as a mask, so tracking and displaying of the content only are done within the masked area of the texture. Therefore I'd like to load a texture that has exactly the same size as the camera images.
I've done all steps to load the texture, but when I call GetDesc() the fields Width and Height of the D3DSURFACE_DESC struct are of the next bigger power-of-2 size. I do not care that the actual memory used for the texture is optimized for the graphics card but I did not find any way to get the dimensions of the original image file on the harddisk.
I have also searched (so far without success) for a way to load the image into the computer's RAM only (no graphics card required) without adding a new dependency to the code. Otherwise I'd have to use OpenCV (which might be a good idea anyway when it comes to tracking), but at the moment I'm still trying to avoid including OpenCV.
thanks for your hints,
Norbert
Use D3DXCreateTextureFromFileEx with parameters 3 and 4 (Width and Height) set to D3DX_DEFAULT_NONPOW2.
After that, you can use
D3DSURFACE_DESC Desc;
m_Sprite->GetLevelDesc(0, &Desc);
to fetch the height & width.
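For reference, a sketch of such a call; the file name, pool, and filter choices are only examples, and non-power-of-two textures still require hardware support (check D3DCAPS9::TextureCaps):
LPDIRECT3DTEXTURE9 pMaskTexture = NULL;
D3DXIMAGE_INFO imageInfo; // also receives the original file dimensions

HRESULT hr = D3DXCreateTextureFromFileEx(
    pDevice,
    TEXT("mask.png"),
    D3DX_DEFAULT_NONPOW2,  // Width: keep the file's width
    D3DX_DEFAULT_NONPOW2,  // Height: keep the file's height
    1,                     // MipLevels
    0,                     // Usage
    D3DFMT_UNKNOWN,        // Format: derive from the file
    D3DPOOL_MANAGED,
    D3DX_DEFAULT,          // Filter
    D3DX_DEFAULT,          // MipFilter
    0,                     // ColorKey (disabled)
    &imageInfo,
    NULL,                  // Palette
    &pMaskTexture);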
D3DXGetImageInfoFromFile may be what you are looking for.
I'm assuming you are using D3DX because I don't think Direct3D automatically resizes any textures.
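A small sketch of querying the on-disk dimensions without creating any texture (the file name is a placeholder):
D3DXIMAGE_INFO info;
if (SUCCEEDED(D3DXGetImageInfoFromFile(TEXT("camera_image.png"), &info)))
{
    UINT fileWidth  = info.Width;  // dimensions of the image file itself,
    UINT fileHeight = info.Height; // not of the (possibly padded) texture
}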