DirectX 9 point sprites not scaling - C++

I got point sprites working almost immediately, but I'm stuck on one thing: they are rendered as what look like 2x2 pixel sprites, which is not easy to see, especially when there's other motion. I've tried tweaking all the variables; here's the code that works best so far:
void renderParticles()
{
    for (int i = 0; i < particleCount; i++)
    {
        particlePoints[i] += particleSpeeds[i];
    }

    void* data;
    pParticleBuffer->Lock(0, particleCount * sizeof(PARTICLE_VERTEX), &data, NULL);
    memcpy(data, particlePoints, sizeof(particlePoints));
    pParticleBuffer->Unlock();

    pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSIZE, (DWORD)1.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MAX, (DWORD)9999.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MIN, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, (DWORD)1.0f);

    pd3dDevice->SetStreamSource(0, pParticleBuffer, 0, sizeof(D3DXVECTOR3));
    pd3dDevice->DrawPrimitive(D3DPT_POINTLIST, 0, particleCount);

    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, FALSE);
}
OK, so when I change POINTSCALE_A and POINTSCALE_B, nothing really changes, and the same goes for POINTSCALE_C. POINTSIZE also makes no difference. When I try to assign anything to POINTSIZE_MAX and POINTSIZE_MIN, the sprites stop rendering entirely, no matter what value I use. I also tried setting POINTSIZE with POINTSCALEENABLE set to FALSE; no luck there either.
This seems to be something few people have found an answer to. An explanation of the mechanism exists on MSDN, and yes, I did check Stack Overflow and found a similar question with no answer. Another source only suggested setting the max and min values, which, as I said, pretty much make my particles disappear.
particlePoints and particleSpeeds are D3DXVECTOR3 arrays, and I get what I expect from them. A book I follow suggested defining a custom vertex with XYZ and diffuse, but to be honest I see no reason for this; it just adds more to an already long list of declarations.
Any help is welcome, thanks in advance.
Edit: Further tweaking showed that when any of the scale values is above 0.99999997f (at least somewhere between that and 0.99999998f the effect changes), I get the tiny version; at or below that value I pretty much get the size of the texture. That is still not great, since the texture may be large, and it fails the goal of being controllable.

Glad to help :) My comment as an answer:
One more problem I've seen is your float-to-DWORD cast. The official documentation suggests the conversion *((DWORD*)&Variable) for values passed to SetRenderState (see the docs). I'm not very familiar with C++, but I would assume this makes a difference, because your cast converts the value to an actual DWORD integer, while the API expects the bits of a float stored in the DWORD's memory space.
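For illustration, a minimal sketch of that conversion applied to the render states above (the FloatToDWORD helper name and the 4.0f size are my own choices, not from the original code):
// Reinterpret the float's bit pattern as a DWORD without converting the value;
// SetRenderState expects this for float-valued render states.
inline DWORD FloatToDWORD(float f)
{
    return *((DWORD*)&f);
}

// Point size and the scale coefficients are floats, so pass their raw bits
// instead of (DWORD)1.0f, which yields the integer 1.
pd3dDevice->SetRenderState(D3DRS_POINTSIZE,    FloatToDWORD(4.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, FloatToDWORD(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, FloatToDWORD(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, FloatToDWORD(1.0f));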

Failure at displaying Bitmaps in wxWidgets

My application has two pictures embedded in the frame. My code is as follows:
wxMemoryInputStream istream1(Bild_png, sizeof Bild_png);
wxImage Bild_png(istream1, wxBITMAP_TYPE_PNG);
new wxStaticBitmap(p_img, wxID_ANY, wxBitmap(Bild_png));
vbox->Add(p_img ,0);
(vbox is the Sizer)
When I start the app, I see a "T-" in the upper-left corner of both bitmaps. When I change the notebook item ("screen") and come back to the first screen (where the bitmaps are), the "T-" has disappeared...
How can I fix it so that I never see this failure?
I had to call Layout() on the topmost sizer. That solved my problem. It means, at the end:
vbox->Layout();
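For context, a rough sketch of how the pieces fit together with the Layout() call added (variable names reused from the snippet above; renaming the wxImage is just for clarity):
// Bild_png is the embedded PNG byte array, p_img the parent window,
// vbox the topmost sizer.
wxMemoryInputStream istream1(Bild_png, sizeof Bild_png);
wxImage image(istream1, wxBITMAP_TYPE_PNG);
new wxStaticBitmap(p_img, wxID_ANY, wxBitmap(image));
vbox->Add(p_img, 0);

// After all controls have been added, force the topmost sizer to lay out
// its children so the bitmaps are positioned correctly on first show.
vbox->Layout();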
@catalin, I don't think that posting around 2000 lines of source code would have been a better way. I chose this little snippet because it says everything that's needed. A wxWidgets expert was able to give me, from these four lines, the hint that something was wrong with the sizer, not with the picture.
Haven't you been advised before to search the samples? For example the widgets sample; just grep for wxStaticBitmap and I'm sure you'll find something useful.
This is just a poor way of asking a question.
In your C++ snippet you're using Bild_png even before it is declared - really? Then you mention both Bitmaps and notebookitem("screen"), which are unknown to anyone but you.
IMO it is just too... wrong to receive a good answer...

DevIL ilLoad error 1285

I'm having an issue with loading an image with DevIL for OpenGL.
In an earlier part of my project I call
ilInit();
In a function right after that I call my load, just like this:
//generate a texture
ilGenImages( 1, &uiTextureHandle );
//bind our image
ilBindImage( uiTextureHandle );
//load
//ilLoad( IL_PNG, (const ILstring)"fake.png" );
ilLoad( IL_PNG, "fake.png" );
For the sake of error tracking I placed ilGetError() after every call, which returned 0 for all of these except ilLoad, which returns 1285.
After some searching I figured that this is a lack-of-memory error.
So ilLoad always returns 0 and nothing is loaded.
Does anyone know what I'm doing wrong in my loading, or whether I forgot to do something?
I feel I might have forgotten something, and that's the reason why 1285 appears.
A common reason for ilLoad() to fail with IL_OUT_OF_MEMORY is simply if the PNG file you're using is corrupt.
However, 1285 means IL_INVALID_VALUE - it suggests the path you're giving it is likely wrong. Try an absolute path (remembering that backslashes aren't okay in C++ string literals unless you double them).
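A minimal sketch of that suggestion (the absolute path below is only a placeholder for wherever your file actually lives, and il.h is assumed to be included):
ilInit();

ILuint uiTextureHandle = 0;
ilGenImages(1, &uiTextureHandle);
ilBindImage(uiTextureHandle);

// Absolute path; note the doubled backslashes inside the string literal.
if (!ilLoad(IL_PNG, "C:\\projects\\game\\assets\\fake.png"))
{
    ILenum err = ilGetError();   // 1285 corresponds to IL_INVALID_VALUE
    printf("ilLoad failed with error %u\n", err);
}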
I personally have used DevIL for quite some time and did like it. However, I urge you to consider FreeImage. It has a bit more development going on and is quite stable - I used it in a commercial engine for all my image needs, and it integrates decently well with DirectX/OpenGL much like DevIL.

Objective-C, CoreAudio: possible reasons why the played sound has additional noise, hiss and pops?

I'm using CoreAudio to play some continuous sound. I managed to get it working, but now I have a problem I can't overcome. The sound does play, and more than that, it's the actual sound I need, not just noise, but along with it I get noise, hiss and pops as well.
I verified the sample rate, zeroed out all the silence buffers, checked the channels (I'm positive I only have 1) and double-checked the algorithm that feeds the playback method (but I'll add it here just to be sure). My experience with sound is slim, so I'm probably doing something terribly wrong. I would like to know if there are other things to check, or what the best approach is here and where to look first.
//init
playedBufferSize = audioFilesSize[audioFilesIndex];
startPointForPlayedBuffer = 0;

//feed the audio
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioBuffer buffer = ioData->mBuffers[0];
    if (playedBufferSize >= buffer.mDataByteSize) {
        // Enough data left in the current file to fill the entire buffer.
        memcpy(buffer.mData, audioFiles[audioFilesIndex] + startPointForPlayedBuffer, buffer.mDataByteSize);
        playedBufferSize -= buffer.mDataByteSize;
        startPointForPlayedBuffer += buffer.mDataByteSize;
    } else {
        // Copy the remaining bytes of the current file, then copy from the start of the next file.
        memcpy(buffer.mData, audioFiles[audioFilesIndex] + startPointForPlayedBuffer, playedBufferSize);
        nextAudioFileIndex();
        memcpy(buffer.mData + playedBufferSize, audioFiles[audioFilesIndex], playedBufferSize);
        playedBufferSize = audioFilesSize[audioFilesIndex] - (buffer.mDataByteSize - playedBufferSize);
        startPointForPlayedBuffer = (buffer.mDataByteSize - playedBufferSize);
    }
    return noErr;
}
EDIT: I know that the code above won't play the sound continuously, because at some point it fills the buffer with a bunch of 0's; however, I get many strange sounds along with that. If the sound would play, stop for a short while, and start again, I would be happy - a good start :)
EDIT 2: I edited the code so that it no longer outputs silence; unfortunately I still get the hiss and pops...
Thanks!
I'm not completely familiar with what you're doing, but I had a similar issue using Core Graphics on OS X, where I was getting visible "noise" on my images in certain situations. The issue there was with my buffers: I had to actually zero them out or I would get noise in them. Can you try doing a memset on buffer.mData before using it?
The issue that came into play, and why I think you may be seeing the same type of thing, is that when you allocate large chunks of memory on OS X they are typically zeroed for security reasons, but small pieces of memory won't be zeroed out. That can lead to strange bugs: at first you may be allocating a large enough piece of memory that it's cleared for you, but as you continue streaming you may be allocating smaller pieces that aren't.
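As a concrete sketch, the suggestion amounts to something like this at the top of the render callback (whether uninitialized memory is actually the cause here is just my guess):
AudioBuffer buffer = ioData->mBuffers[0];

// Clear the whole output buffer first, so any bytes that are not
// overwritten below come out as silence rather than leftover garbage.
memset(buffer.mData, 0, buffer.mDataByteSize);

// ...then copy the audio data into buffer.mData as before.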

wxTextCtrl unable to load large texts

I've read about the solution written here in a post a year ago:
wx.TextCtrl.LoadFile()
Now I have a Windows application that generates color frequency statistics which are saved in 3D arrays. Here is part of my code; as you will see below, the printing of the statistics depends on a slider which specifies the threshold.
void Project1Frm::WxButton2Click(wxCommandEvent& event)
{
    char stat[32] = "";
    int ***report = pGLCanvas->GetPixel();
    float max = pGLCanvas->GetMaxval();
    float dist = WxSlider5->GetValue();
    WxRichTextCtrl1->Clear();
    WxRichTextCtrl1->SetMaxLength(100);
    if(dist>0)
    {
        WxRichTextCtrl1->AppendText(wxT("Statistics\nR\tG\tB\t\n"));
        for(int m=0; m<256; m++){
            for(int n=0; n<256; n++){
                for(int o=0; o<256; o++){
                    if((report[m][n][o]/max)>=(dist/100.0))
                    {
                        sprintf(stat,"%d\t%d\t%d\t%3.6f%%\n",m,n,o,report[m][n][o]/max*100.0);
                        WxRichTextCtrl1->AppendText(wxT(stat));
                    }
                }
            }
        }
    }
    else if(dist==0) WxRichTextCtrl1->LoadFile("histodata.txt");
}
The solution I've tried so far is that when I need to print all the statistics, I read them from a text file rather than going through the 3D array... I would like to ask whether the Python implementation of the segmenting can be ported to C++, or whether there are better ways to deal with this problem. Thank you.
EDIT:
Another reason why I used a text file instead is that I observed that even when I only do the sprintf [with the line WxRichTextCtrl1->AppendText(wxT(stat)); commented out], the computer starts to slow down.
-Ric
Disclaimer: My answer is more of an alternative than a solution.
I don't believe that there's any situation in which a user of this application is going to find it useful to have a scrolled text window containing ~16 million lines of numbers. It would be impossible to scroll to one specific location in the list that the user might need to see easily. This is all assuming that every single number you output here has some significance to the user of course (you are showing them on the screen for a reason). Providing the user with controls to look up specific, fixed (reasonable) ranges of those numbers would be a better solution, not only in regards to a better user experience, but also in helping to resolve your issue here.
On the other hand, if you still insist on one single window containing all 64 million numbers, you seem to have a very rigid data structure here, which means you can (and should) take advantage of using a virtual grid control (wxGrid), which is intended to work smoothly even with incredibly large data sets like this. The user will likely find this control easier to read and find the section of data they are looking for.
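To sketch what that could look like, a virtual table returns cell values on demand instead of storing millions of formatted lines (the ColorStatsTable class and its wiring are made up for illustration and assume the report/max data from the question):
#include <wx/grid.h>

// Virtual table: wxGrid asks this object for cell values on demand,
// so nothing is formatted or stored until it is actually displayed.
class ColorStatsTable : public wxGridTableBase
{
public:
    ColorStatsTable(int ***report, float maxVal) : m_report(report), m_max(maxVal) {}

    virtual int GetNumberRows() { return 256 * 256 * 256; }
    virtual int GetNumberCols() { return 4; }                      // R, G, B, %
    virtual bool IsEmptyCell(int, int) { return false; }

    virtual wxString GetValue(int row, int col)
    {
        int r = (row >> 16) & 0xFF, g = (row >> 8) & 0xFF, b = row & 0xFF;
        switch (col)
        {
            case 0:  return wxString::Format(wxT("%d"), r);
            case 1:  return wxString::Format(wxT("%d"), g);
            case 2:  return wxString::Format(wxT("%d"), b);
            default: return wxString::Format(wxT("%3.6f%%"), m_report[r][g][b] / m_max * 100.0);
        }
    }

    virtual void SetValue(int, int, const wxString&) {}            // read-only

private:
    int ***m_report;
    float m_max;
};

// Hypothetical usage: grid->SetTable(new ColorStatsTable(report, max), true);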

Passing D3DFMT_UNKNOWN into IDirect3DDevice9::CreateTexture()

I'm kind of wondering about this: if you create a texture in memory in DirectX with the CreateTexture function:
HRESULT CreateTexture(
    UINT Width,
    UINT Height,
    UINT Levels,
    DWORD Usage,
    D3DFORMAT Format,
    D3DPOOL Pool,
    IDirect3DTexture9** ppTexture,
    HANDLE* pSharedHandle
);
...and pass in the D3DFMT_UNKNOWN format, what exactly is supposed to happen? If I try to get the surface of the first or second level, will it cause an error? Can it fail? Will the graphics device just choose a format of its own? Could this cause problems between different graphics card models/brands?
I just tried it out and it does not fail, mostly
When Usage is set to D3DUSAGE_RENDERTARGET or D3DUSAGE_DYNAMIC, it consistently came out as D3DFMT_A8R8G8B8, no matter what I did to the back buffer format or other settings. I don't know if that has to do with my graphics card or not. My guess is that specifying unknown means, "pick for me", and that the 32-bit format is easiest for my card.
When the usage was D3DUSAGE_DEPTHSTENCIL, it failed consistently.
So my best conclusion is that specifying D3DFMT_UNKNOWN as the format gives DirectX the choice of what it should be. Or perhaps it always just defaults to D3DFMT_A8R8G8B8.
Sadly, I can't confirm any of this in any documentation anywhere. :|
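For anyone who wants to repeat the experiment, a sketch of one way to check what actually gets created (the device pointer, size and usage here are arbitrary, and error handling is trimmed):
IDirect3DTexture9* pTex = NULL;
HRESULT hr = pd3dDevice->CreateTexture(256, 256, 1,
                                       D3DUSAGE_RENDERTARGET,
                                       D3DFMT_UNKNOWN,      // let the runtime/driver pick
                                       D3DPOOL_DEFAULT,
                                       &pTex, NULL);
if (SUCCEEDED(hr))
{
    D3DSURFACE_DESC desc;
    pTex->GetLevelDesc(0, &desc);
    // desc.Format shows which format was actually chosen
    // (D3DFMT_A8R8G8B8 in my tests).
    pTex->Release();
}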
MSDN doesn't say. But I'm pretty sure you'd get "D3DERR_INVALIDCALL" as a result.
If the method succeeds, the return value is D3D_OK. If the method fails, the return value can be one of the following: D3DERR_INVALIDCALL, D3DERR_OUTOFVIDEOMEMORY, E_OUTOFMEMORY.
I think this falls into the "undefined" category. Some drivers will fail the allocations, while others may default to something. I've never seen anything in the WDK that says that this condition needs to be handled. I'm guessing if you enable the debug DX runtime you will see an error message.