libpng: write a PNG bigger than 1002px - C++

I'm currently writing a C++ program which should produce a PNG file as output. I took the source code from here and condensed it into a little program that actually works; my code is nopasted here.
BUT: it only works as long as the image doesn't exceed 1002 pixels in width. I'm fairly sure the problem is somewhere around lines 29/30, i.e. a malloc problem, but I don't get it.
Thanks for your help & greetings

Without diving into the code too deeply, there are these interesting constants:
unsigned width = 1003;
unsigned height = 500;
int rowbytes = 4000;
The last one directly controls the amount of memory allocated. Have you tried increasing this value?
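For instance, a minimal sketch of deriving the row size from the image parameters instead of hard-coding it, assuming 8-bit RGBA output (libpng can also report the value via png_get_rowbytes() once the IHDR has been set):

unsigned width  = 1003;
unsigned height = 500;
size_t rowbytes = width * 4;   // 4 bytes per RGBA pixel; grows with the width

// One buffer per row, each sized to the actual row length.
png_bytep *row_pointers = (png_bytep *)malloc(height * sizeof(png_bytep));
for (unsigned y = 0; y < height; y++)
    row_pointers[y] = (png_bytep)malloc(rowbytes);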

Related

XLib font size too small?

Currently I'm using this font in my C++ program:
-misc-fixed-medium-r-normal--12-*-*-*-*-*-iso8859-15
where '12', the size, is also the font size I'm currently using with Linux Mint 18.1.
But when my program draws a string, it is shown very small! It looks like it has a size of '6'!
Do I need to double the font size for my program, or something like that?
TIA
Regards
Earlybite
I searched the internet for some hours, including here, but I couldn't find a solution. I also couldn't spot the difference in the "pre-version" of my program, because there drawing with XLib and XDrawString worked normally.
I also noticed that even size = 40 made no difference compared to e.g. size = 20, so there had to be a difference in the code.
So I went through the pre-version code line by line, and at last I found that little line: XSetFont(), which makes string drawing normal.
E.g. like this:
XSetFont(mDisplay, vGC, this->mFontPtr.fid); // <-- HERE!
vGCVal.foreground = mXForeColorA->X_Color.pixel;
XChangeGC(mDisplay,vGC, GCForeground, &vGCVal);
XDrawString(mDisplay, vPix, vGC, x, y, nDrawString.c_str(), (int) nDrawString.length());
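For completeness, a minimal sketch of the whole sequence (mDisplay, vGC and the font string are taken from the question; the NULL check is an addition):

// Load the font and install it in the GC; without XSetFont() the GC
// keeps drawing with its default font, whatever size was requested.
XFontStruct *fontPtr = XLoadQueryFont(mDisplay,
    "-misc-fixed-medium-r-normal--12-*-*-*-*-*-iso8859-15");
if (fontPtr != NULL)
    XSetFont(mDisplay, vGC, fontPtr->fid);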

Malloc Error: OpenCV/C++ while push_back Vector

I'm trying to create a descriptor using FAST for the point detection and SIFT for building the descriptor. For that purpose I use OpenCV. While I use OpenCV's FAST, I only use parts of the SIFT code, because I only need the descriptor. Now I have a really nasty malloc error and I don't know how to solve it. I posted my code on GitHub because it is big and I don't really know where the error comes from. I just know that it is created at the end of the do-while loop:
        features2d.push_back(features);
        features.clear();
        candidates2d.push_back(candidates);
        candidates.clear();
    }
} while (candidates.size() > 100);
As you can see in the code on GitHub, I already tried to release the application's memory. Xcode's analysis says that my application uses 9 MB of memory. I tried to debug the error, but it was very complicated and I haven't found any clue where it comes from.
EDIT
I wondered if this error could occur because I try to access the image pixel values passed to calcOrientationHist(...) with img.at<sift_wt>(...), where typedef float sift_wt, at lines 56 and 57 in my code, because normally the patch I pass has type 0, which means it is CV_8UC1. But then, I copied this part from sift.cpp, lines 330 and 331. Normally the SIFT descriptor should also get a grayscale image, shouldn't it?
EDIT2
After changing the type at the img.at<sift_wt>(...) position, nothing changed. So I googled for solutions and landed at the GuardMalloc feature of Xcode. Enabling it showed me a new error, which is probably the reason I get the malloc error: it occurs at line 77 of my code, and the error it gives me there is EXC_BAD_ACCESS (Code=1, address=....). These are the lines in question:
for( k = 0; k < len; k++ ) {
    int bin = cvRound((n/360.f)+Ori[k]);
    if( bin >= n )
        bin -= n;
    if( bin < 0 )
        bin += n;
    temphist[bin] += W[k]*Mag[k];
}
The values of the variables mentioned are the following:
bin = 52, len = 169, n = 36, k = 0; W, Mag, Ori and temphist are not shown.
Here is the GuardMalloc output (sorry, but I don't really understand what exactly it wants):
GuardMalloc[Test-1935]: Allocations will be placed on 16 byte boundaries.
GuardMalloc[Test-1935]: - Some buffer overruns may not be noticed.
GuardMalloc[Test-1935]: - Applications using vector instructions (e.g., SSE) should work.
GuardMalloc[Test-1935]: version 108
Test(1935,0x102524000) malloc: protecting edges
Test(1935,0x102524000) malloc: enabling scribbling to detect mods to free blocks
The answer is simpler than I thought...
The problem was that the calculation of bin in the for loop produced the wrong value: instead of adding Ori[k], it should be a multiplication with Ori[k].
That mistake resulted in a bin value of 52, but the length of the array that temphist points to is only 38.
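For reference, a sketch of the corrected line (variable names as in the snippet above; this mirrors the scaling used in OpenCV's sift.cpp):

// Scale the angle Ori[k] (in degrees) into one of n histogram bins.
// Multiplying by n/360.f keeps bin within [0, n); adding instead produced 52.
int bin = cvRound((n/360.f) * Ori[k]);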
For anyone with similar errors, I really recommend using GuardMalloc or Valgrind to debug malloc errors.

DirectX 9 point sprites not scaling

I got point sprites working almost immediately, but I'm stuck on one thing: they are rendered as what are probably 2x2 pixel sprites, which is not very easy to see, especially if there's other motion. I've tried tweaking all the variables; here's the code that probably works best:
void renderParticles()
{
    for(int i = 0; i < particleCount; i++)
    {
        particlePoints[i] += particleSpeeds[i];
    }
    void* data;
    pParticleBuffer->Lock(0, particleCount*sizeof(PARTICLE_VERTEX), &data, NULL);
    memcpy(data, particlePoints, sizeof(particlePoints));
    pParticleBuffer->Unlock();
    pd3dDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, TRUE);
    pd3dDevice->SetRenderState(D3DRS_POINTSIZE, (DWORD)1.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MAX, (DWORD)9999.0f);
    //pd3dDevice->SetRenderState(D3DRS_POINTSIZE_MIN, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, (DWORD)0.0f);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, (DWORD)1.0f);
    pd3dDevice->SetStreamSource(0, pParticleBuffer, 0, sizeof(D3DXVECTOR3));
    pd3dDevice->DrawPrimitive(D3DPT_POINTLIST, 0, particleCount);
    pd3dDevice->SetRenderState(D3DRS_POINTSPRITEENABLE, FALSE);
    pd3dDevice->SetRenderState(D3DRS_POINTSCALEENABLE, FALSE);
}
OK, so when I change POINTSCALE_A and POINTSCALE_B, nothing really changes; same for C. POINTSIZE also makes no difference. When I try to assign something to POINTSIZE_MAX and POINTSIZE_MIN, no matter what I assign, it always stops the rendering of the sprites. I also tried setting POINTSIZE with POINTSCALEENABLE set to false; no luck there either.
This seems to be something few people have found an answer to. An explanation of the mechanism exists on MSDN; and yes, I did check Stack Overflow and found a similar question with no answer. Another source only suggested setting the max and min variables, which, as I said, pretty much make my particles disappear.
particlePoints and particleSpeeds are D3DXVECTOR3 arrays, and I get what I expect from them. A book I follow suggested defining a custom vertex with XYZ and diffuse, but to be honest I see no reason for this; it just adds more to an already long list of declarations.
Any help is welcome, thanks in advance.
Edit: Further tweaking showed that when any of the scale values is above 0.99999997f (at least, between that and 0.99999998f I see the effect), I get the tiny version; if I put them at that value or lower, I pretty much get the size of the texture - though that is still not really good, as it may be large, and it pretty much fails the task of being controllable.
Glad to help :) My comment as an answer:
One more problem that I've seen is your float-to-DWORD cast. The official documentation suggests the conversion *((DWORD*)&Variable) (doc) to be passed to SetRenderState. I'm not very familiar with C++, but I would assume this makes a difference, because your cast sets an actual integer DWORD, while the API expects a float in the DWORD's memory space.
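As a sketch of what that fix looks like against the code above (FtoDW is a hypothetical helper name; the render states are the ones from the question):

// Reinterpret the bits of a float as a DWORD, as the D3D9 docs suggest.
// (DWORD)1.0f would convert the value instead, yielding the integer 1.
inline DWORD FtoDW(float f) { return *((DWORD*)&f); }

pd3dDevice->SetRenderState(D3DRS_POINTSIZE,    FtoDW(1.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_A, FtoDW(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_B, FtoDW(0.0f));
pd3dDevice->SetRenderState(D3DRS_POINTSCALE_C, FtoDW(1.0f));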

Objective-C, CoreAudio: Possible reasons for which played sound has additional noise, hiss and pops?

I'm using CoreAudio to play some continuous sound. I managed to get it working, but now I have a problem I can't overcome. It plays sound, and more than that, it plays the actual sound I need, not just noise; but together with it I get noise, hiss, and pops as well.
I verified the sample rate, zeroed out all the silence buffers, checked the channels (I'm positive I only have 1), and double-checked the algorithm that feeds the playback method (but I'll add it here just to be sure). My experience with sound is slim, so I'm probably doing something terribly wrong. I would like to know if there are other things to check, or what the best approach to this is; where should I look first?
//init
playedBufferSize = audioFilesSize[audioFilesIndex];
startPointForPlayedBuffer = 0;

//feed the audio
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    AudioBuffer buffer = ioData->mBuffers[0];
    if (playedBufferSize >= buffer.mDataByteSize) {
        memcpy(buffer.mData, audioFiles[audioFilesIndex] + startPointForPlayedBuffer, buffer.mDataByteSize);
        playedBufferSize -= buffer.mDataByteSize;
        startPointForPlayedBuffer += buffer.mDataByteSize;
    } else {
        memcpy(buffer.mData, audioFiles[audioFilesIndex] + startPointForPlayedBuffer, playedBufferSize);
        nextAudioFileIndex();
        memcpy(buffer.mData + playedBufferSize, audioFiles[audioFilesIndex], playedBufferSize);
        playedBufferSize = audioFilesSize[audioFilesIndex] - (buffer.mDataByteSize - playedBufferSize);
        startPointForPlayedBuffer = (buffer.mDataByteSize - playedBufferSize);
    }
    return noErr;
}
EDIT: I know that the code above won't play the sound continuously, because it fills the buffer with a bunch of 0's at some point; however, I get many strange sounds along with that. If the sound would just play, stop for a short while, and start again, I would be happy - a good start :)
EDIT2: I edited the code so that it won't output silence anymore; unfortunately, I still get the hiss and pops...
Thanks!
I'm not completely familiar with what you're doing, but I had a similar issue using Core Graphics on OS X, where I was getting visible "noise" on my images in certain situations. The issue there was with my buffers: I had to explicitly zero them out, or else I would get noise in them. Can you try doing a memset on buffer.mData before using it?
The issue that came into play, and why I think you may be seeing the same kind of thing, is that when you allocate large chunks of memory on OS X, it's typically zeroed for security reasons, but small pieces of memory won't be zeroed out. That can lead to strange bugs - i.e., at first you may be allocating a large enough piece of memory that it's cleared for you, but as you continue streaming you may be allocating smaller pieces that aren't cleared.
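A minimal sketch of that suggestion, placed at the top of the callback from the question (same variable names):

// Zero the whole output buffer first, so any bytes the copies below
// don't overwrite come out as silence rather than leftover memory.
memset(buffer.mData, 0, buffer.mDataByteSize);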

WxTextCtrl unable to load large texts

I've read about the solution written here in a post from a year ago:
wx.TextCtrl.LoadFile()
Now I have a Windows application that generates color frequency statistics, which are saved in 3D arrays. Here is part of my code; as you can see below, the printing of the statistics depends on a slider which specifies the threshold.
void Project1Frm::WxButton2Click(wxCommandEvent& event)
{
    char stat[32] = "";
    int ***report = pGLCanvas->GetPixel();
    float max = pGLCanvas->GetMaxval();
    float dist = WxSlider5->GetValue();
    WxRichTextCtrl1->Clear();
    WxRichTextCtrl1->SetMaxLength(100);
    if(dist > 0)
    {
        WxRichTextCtrl1->AppendText(wxT("Statistics\nR\tG\tB\t\n"));
        for(int m=0; m<256; m++){
            for(int n=0; n<256; n++){
                for(int o=0; o<256; o++){
                    if((report[m][n][o]/max) >= (dist/100.0))
                    {
                        sprintf(stat,"%d\t%d\t%d\t%3.6f%%\n",m,n,o,report[m][n][o]/max*100.0);
                        WxRichTextCtrl1->AppendText(wxT(stat));
                    }
                }
            }
        }
    }
    else if(dist == 0) WxRichTextCtrl1->LoadFile("histodata.txt");
}
The solution I've tried so far is that, when I have to print all the statistics, I get them from a text file rather than going through the 3D array... I would like to ask whether the Python implementation of the segmenting can be ported to C++, or whether there are better ways to deal with this problem. Thank you.
EDIT:
Another reason I used a text file instead is that I observed that whenever I do only the sprintf [with the line WxRichTextCtrl1->AppendText(wxT(stat)); commented out], the computer starts to slow down.
-Ric
Disclaimer: My answer is more of an alternative than a solution.
I don't believe there's any situation in which a user of this application will find it useful to have a scrolled text window containing ~16 million lines of numbers. It would be impossible to scroll easily to the one specific location in the list that the user might need to see. This all assumes, of course, that every single number you output has some significance to the user (you are showing them on the screen for a reason). Providing the user with controls to look up specific, fixed (reasonable) ranges of those numbers would be a better solution, not only for the user experience but also for resolving your issue here.
On the other hand, if you still insist on a single window containing all 64 million numbers, you have a very rigid data structure here, which means you can (and should) take advantage of a virtual grid control (wxGrid), which is designed to work smoothly even with incredibly large data sets like this. The user will likely find this control easier to read and easier to use to find the section of data they are looking for.
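A minimal sketch of that approach, reusing the report/max data from the question (the HistogramTable class name and the flat row-to-RGB mapping are assumptions; wxGridTableBase is the wxWidgets hook for supplying cell values on demand):

#include <wx/grid.h>

// Virtual table: wxGrid calls GetValue() only for the cells currently
// visible, so the ~16.7M rows are never materialized as text up front.
class HistogramTable : public wxGridTableBase
{
public:
    HistogramTable(int ***report, float max) : m_report(report), m_max(max) {}

    int GetNumberRows() override { return 256*256*256; }
    int GetNumberCols() override { return 4; }   // R, G, B, frequency
    bool IsEmptyCell(int, int) override { return false; }

    wxString GetValue(int row, int col) override
    {
        // Decode the flat row index back into (m, n, o) = (R, G, B).
        int m = row >> 16, n = (row >> 8) & 0xFF, o = row & 0xFF;
        switch (col) {
            case 0:  return wxString::Format("%d", m);
            case 1:  return wxString::Format("%d", n);
            case 2:  return wxString::Format("%d", o);
            default: return wxString::Format("%3.6f%%", m_report[m][n][o]/m_max*100.0);
        }
    }

    void SetValue(int, int, const wxString&) override {}  // read-only view

private:
    int ***m_report;
    float m_max;
};

// Usage (hypothetical): grid->SetTable(new HistogramTable(report, max), true);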