Passing D3DFMT_UNKNOWN into IDirect3DDevice9::CreateTexture() - c++

I'm kind of wondering about this, if you create a texture in memory in DirectX with the CreateTexture function:
HRESULT CreateTexture(
    UINT              Width,
    UINT              Height,
    UINT              Levels,
    DWORD             Usage,
    D3DFORMAT         Format,
    D3DPOOL           Pool,
    IDirect3DTexture9 **ppTexture,
    HANDLE            *pSharedHandle
);
...and pass in the D3DFMT_UNKNOWN format, what is supposed to happen exactly? If I try to get the surface of the first or second level, will it cause an error? Can it fail? Will the graphics device just pick a format of its own choosing? Could this cause problems between different graphics card models/brands?

I just tried it out, and it does not fail, mostly.
When Usage is set to D3DUSAGE_RENDERTARGET or D3DUSAGE_DYNAMIC, it consistently came out as D3DFMT_A8R8G8B8, no matter what I did to the back buffer format or other settings. I don't know if that has to do with my graphics card or not. My guess is that specifying unknown means, "pick for me", and that the 32-bit format is easiest for my card.
When the usage was D3DUSAGE_DEPTHSTENCIL, it failed consistently.
So my best conclusion is that specifying D3DFMT_UNKNOWN as the format gives DirectX the choice of what it should be. Or perhaps it always just defaults to D3DFMT_A8R8G8B8.
Sadly, I can't confirm any of this in any documentation anywhere. :|
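For anyone who wants to reproduce the check, this is roughly how to ask the runtime what it actually picked (a minimal sketch; pd3dDevice is assumed to be a valid IDirect3DDevice9*, and the 256x256 size and render-target usage are just placeholders):

// Minimal sketch: create a texture with D3DFMT_UNKNOWN and ask the runtime
// what format it actually chose. pd3dDevice is assumed to be a valid
// IDirect3DDevice9*; the size and usage flags are placeholders.
IDirect3DTexture9* pTex = NULL;
HRESULT hr = pd3dDevice->CreateTexture(256, 256, 1,
                                       D3DUSAGE_RENDERTARGET,
                                       D3DFMT_UNKNOWN,
                                       D3DPOOL_DEFAULT,
                                       &pTex, NULL);
if (SUCCEEDED(hr))
{
    D3DSURFACE_DESC desc;
    pTex->GetLevelDesc(0, &desc);
    // In my tests desc.Format came back as D3DFMT_A8R8G8B8 (21).
    pTex->Release();
}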

MSDN doesn't say, but I'm pretty sure you'd get D3DERR_INVALIDCALL as a result. The documentation only states:
If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.

I think this falls into the "undefined" category. Some drivers will fail the allocations, while others may default to something. I've never seen anything in the WDK that says that this condition needs to be handled. I'm guessing if you enable the debug DX runtime you will see an error message.

Related

SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE) return 0

UINT uiType = 0;
if (SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE, 0, &uiType, 0) != 0) {
    Debug(uiType); // shows 0
}
This happened to me on Remote desktop with Windows Server 2012 R2.
According to the docs there are 2 possible values:
The possible values are FE_FONTSMOOTHINGSTANDARD (1) and
FE_FONTSMOOTHINGCLEARTYPE (2).
I also found this similar question, but it has no answers:
Meaning of, SystemInformation.FontSmoothingType's return value
Does anyone know what uiType 0 means?
EDIT: On that remote machine SPI_GETFONTSMOOTHING returns 0.
Determines whether the font smoothing feature is enabled.
The docs are obviously wrong. I would assume the correct way is to first check SPI_GETFONTSMOOTHING and only then SPI_GETFONTSMOOTHINGTYPE.
The font smoothing "type" (SPI_GETFONTSMOOTHINGTYPE) is only meaningful if font smoothing is enabled (SPI_GETFONTSMOOTHING). The same is true for all of the other font smoothing attributes, like SPI_GETFONTSMOOTHINGCONTRAST and SPI_GETFONTSMOOTHINGORIENTATION.
You should check SPI_GETFONTSMOOTHING first. If it returns TRUE (non-zero), then you can query the other font smoothing attributes. If it returns FALSE (zero), then you are done. If you request the other font smoothing attributes, you will get meaningless noise.
So, in other words, your edit is correct, and the MSDN documentation could afford to be improved. I'm not sure it is "incorrect"; this seems like a pretty obvious design to me. It is a C API; calling it with the wrong parameters can be assumed to lead to wrong results.
The documentation does say that the only possible return values for SPI_GETFONTSMOOTHINGTYPE are FE_FONTSMOOTHINGSTANDARD and FE_FONTSMOOTHINGCLEARTYPE, so it would not be possible for this parameter to indicate that font smoothing is disabled or not applicable. The current implementation of SystemParametersInfo might return 0 for the case where font smoothing is disabled, but since the documentation doesn't explicitly say that you can rely on that, you shouldn't rely on it.
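A minimal sketch of that ordering (GetSmoothingType is just a hypothetical helper name; it assumes the usual <windows.h> definitions):

#include <windows.h>

// Returns the smoothing type (1 or 2), or 0 if font smoothing is disabled.
UINT GetSmoothingType()
{
    BOOL smoothingEnabled = FALSE;
    // Only ask for the smoothing type if smoothing is enabled at all.
    if (SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &smoothingEnabled, 0) && smoothingEnabled)
    {
        UINT uiType = 0;
        if (SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE, 0, &uiType, 0))
        {
            // FE_FONTSMOOTHINGSTANDARD (1) or FE_FONTSMOOTHINGCLEARTYPE (2)
            return uiType;
        }
    }
    // Smoothing is off (e.g. over Remote Desktop) or the call failed;
    // the "type" is not meaningful here.
    return 0;
}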

DirectX 9 point sprites not scaling

I got point sprites working almost immediately, but I'm stuck on one thing: they are rendered as what look like 2x2-pixel sprites, which is not very easy to see, especially if there's other motion. I've tried tweaking all the variables; here's the code that probably works best:
void renderParticles()
{
    for (int i = 0; i < particleCount; i++)
    {
        particlePoints[i] += particleSpeeds[i];
    }

    void* data;
    pParticleBuffer->Lock(0, particleCount * sizeof(PARTICLE_VERTEX), &data, NULL);
    memcpy(data, particlePoints, sizeof(particlePoints));
    pParticleBuffer->Unlock();

    pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSIZE, (DWORD)1.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MAX, (DWORD)9999.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MIN, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, (DWORD)1.0f);

    pd3dDevice->SetStreamSource(0, pParticleBuffer, 0, sizeof(D3DXVECTOR3));
    pd3dDevice->DrawPrimitive(D3DPT_POINTLIST, 0, particleCount);

    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, FALSE);
}
Ok, so when I change POINTSCALE_A and POINTSCALE_B, nothing really changes much, same for C. POINTSIZE also makes no difference. When I try to assign something to POINTSIZE_MAX and _MIN, no matter what I assign, it always stops the rendering of the sprites. I also tried setting POINTSIZE with POINTSCALEENABLE set to false, no luck there either.
This looks like something not many people who looked around found an answer to. An explanation of the mechanism exists on MSDN, and yes, I did check Stack Overflow and found a similar question with no answer. Another source only suggested setting the max and min variables, which, as I said, pretty much make my particles disappear.
particlePoints and particleSpeeds are D3DXVECTOR3 arrays, and I get what I expect from them. A book I follow suggested I define a custom vertex with XYZ and diffuse, but to be honest I see no reason for this; it just adds more to an already long list of declarations.
Any help is welcome, thanks in advance.
Edit: Further tweaking showed that when any of the scale values are above 0.99999997f (at least between that and 0.99999998f I see the effect), I get the tiny version; if I put them at or below that, I pretty much get the size of the texture. That still isn't good, though, as it may be large, and it pretty much fails the task of being controllable.
Glad to help :) My comment as an answer:
One more problem that I've seen is your float-to-DWORD cast. The official documentation suggests the conversion *((DWORD*)&Variable) be passed into SetRenderState (doc). I'm not very familiar with C++, but I would assume this makes a difference, because your cast produces an actual integer DWORD, while the API expects the bits of a float stored in the DWORD's memory space.
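To make that concrete, a minimal sketch using the render states from your code (FtoDW is just a hypothetical helper name for that reinterpreting cast):

// Reinterpret the bits of a float as a DWORD instead of converting the value
// to an integer: (DWORD)1.0f would become the integer 1, not the float bits.
inline DWORD FtoDW(float f) { return *((DWORD*)&f); }

pd3dDevice->SetRenderState(D3DRS_POINTSIZE,    FtoDW(1.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, FtoDW(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, FtoDW(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, FtoDW(1.0f));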

devIL ilLoad error 1285

I'm having an issue with loading an image with DevIL for OpenGL.
In an earlier part of my project I call
ilInit();
In a function right after, I call my load just like this:
//generate a texture
ilGenImages( 1, &uiTextureHandle );
//bind our image
ilBindImage( uiTextureHandle );
//load
//ilLoad( IL_PNG, (const ILstring)"fake.png" );
ilLoad( IL_PNG, "fake.png" );
For the sake of error tracking, I placed ilGetError() after every call, which returned 0 for all of these except ilLoad, which returns 1285.
After some searching I figured out that this is a lack-of-memory error.
So ilLoad always returns 0 and nothing gets loaded.
Does anyone know what I'm doing incorrectly in my loading, or whether I forgot to do something? I feel I might have forgotten something, and that's the reason 1285 appears.
A common reason for ilLoad() to fail with IL_OUT_OF_MEMORY is simply if the PNG file you're using is corrupt.
However, 1285 means IL_INVALID_VALUE - it means the path you're giving it is likely wrong. Try an absolute path (remembering that backslashes in C++ string literals have to be escaped as double backslashes).
I personally have used DevIL for quite some time and did like it. However, I urge you to consider FreeImage. It has a bit more development going on and is quite stable - I used it in a commercial engine for all my image needs, and it integrates decently well with DirectX/OpenGL much like DevIL.
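As a rough sketch of what I mean (uiTextureHandle is your handle from above; the absolute path is only a placeholder, so adjust it to wherever fake.png actually lives):

// Rough sketch: load with an absolute path (note the escaped backslashes)
// and check the error right after the call. The path is only a placeholder.
ilGenImages(1, &uiTextureHandle);
ilBindImage(uiTextureHandle);

if (!ilLoad(IL_PNG, "C:\\MyProject\\Data\\fake.png"))
{
    ILenum err = ilGetError();
    if (err == IL_INVALID_VALUE)
    {
        // 1285: DevIL rejected the path/arguments - usually a wrong or
        // relative path resolved against an unexpected working directory.
    }
    else if (err == IL_COULD_NOT_OPEN_FILE)
    {
        // The path parsed, but the file itself could not be opened.
    }
}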

DirectX Crash When Resizing Tiny

I am trying to make my program more bulletproof. My program resizes fine until I make the window super tiny.
A method to prevent that from happening is to set a minimum size, which I know how to do already. I want to look deeper into the problem before I do that.
The following is where the calls start to fail:
hr = swapChain->ResizeBuffers(settings.bufferCount, settings.width, settings.height, DXGI_FORMAT_UNKNOWN, 0);
if (FAILED(hr)) return 0;
I figured it was because the buffer was too small, so I made a fail-safe buffer size. That also failed, though:
hr = swapChain->ResizeBuffers(settings.bufferCount, fallback.width, fallback.height, DXGI_FORMAT_UNKNOWN, 0);
if (FAILED(hr)) return 0;
What is the reason the program chokes when I make it tiny? I thought it was the buffers being too small, but that doesn't seem to be the case.
Edit:
It's been a while since I posted this, so my code has changed a lot. Now it gives an unhandled-exception crash when calling deviceContext->ClearRenderTargetView().
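For reference, the minimum-size workaround I mentioned would look roughly like this (just a sketch against my settings/swapChain variables; clamping to 1 pixel is an arbitrary choice):

// Sketch of the minimum-size workaround: never hand DXGI a zero-sized buffer.
// The clamp to 1 pixel is arbitrary.
UINT safeWidth  = settings.width  > 0 ? settings.width  : 1;
UINT safeHeight = settings.height > 0 ? settings.height : 1;

hr = swapChain->ResizeBuffers(settings.bufferCount, safeWidth, safeHeight, DXGI_FORMAT_UNKNOWN, 0);
if (FAILED(hr)) return 0;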

Memory error using OpenGL "glTexImage2D"

I've been following this tutorial on OpenGL and C++:
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06
...and I've found myself facing quite the error. Whenever I compile and run it, my program crashes with a System.AccessViolationException. I've isolated the problem to this call:
glTexImage2D(GL_TEXTURE_2D, 0, 3, TextureImage[0]->sizeX, TextureImage[0]->sizeY, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
In case you don't want to look through that tutorial, the memory appears to be set up like so:
AUX_RGBImageRec *TextureImage[1];
memset(TextureImage,0,sizeof(void *)*1);
Any help would be awesome. Thanks.
You're crashing because TextureImage[0] is NULL. The initial memset there sets it to NULL; if you follow along in the tutorial, the next line of code is this:
if (TextureImage[0]=LoadBMP("Data/NeHe.bmp"))
Note carefully that there is a single = sign here, not the double == you'd normally see (you may even get a compiler warning here; to suppress it, add extra parentheses around the assignment). Make sure you copied this line of code correctly and that you have a single = here.
If in fact you do have a single =, then check to make sure that LoadBMP is returning a non-NULL value. If it's returning NULL, the most likely cause is that it can't find the bitmap file Data/NeHe.bmp, either because it doesn't exist or it's looking for it in the wrong directory. Make sure your current working directory is set up correctly so that it can find the image.
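To make that concrete, here's a minimal sketch of guarding the upload on LoadBMP's result, using the tutorial's own variables (the extra parentheses are there to silence the assignment-in-condition warning):

// Only upload the texture if LoadBMP actually returned an image.
if ((TextureImage[0] = LoadBMP("Data/NeHe.bmp")))
{
    glTexImage2D(GL_TEXTURE_2D, 0, 3,
                 TextureImage[0]->sizeX, TextureImage[0]->sizeY,
                 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage[0]->data);
}
else
{
    // LoadBMP returned NULL (file missing or unreadable), so skip the upload
    // instead of dereferencing a NULL pointer.
}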
Turns out the bitmap I was trying to load was too large. I shrunk it to 256x256px and it worked perfectly.