I am calling SDL_LoadBMP("duck.bmp") in a loop ten thousand times.
After about the thousandth time, the call fails and SDL_GetError() reports:
"Couldn't open duck.bmp"
I can't figure out why this is -- is there anything I can do to get more information?
It sounds like it might be a memory issue, but there is plenty of system RAM free when this occurs.
Note: the BMP is 32x32.
Even if you have plenty of free system RAM, you could still run out of address space; you generally only get 2 GB to work with in a 32-bit application. Although with an image that tiny, it ought to take far more than a thousand loads to use up that much memory. Are you doing anything else memory-hungry in your loop?
Most importantly, is there a reason you want to re-load the image file 10,000 times? If you're looking for multiple copies of the image to manipulate, I'd recommend making copies of the original surface with SDL_ConvertSurface instead of going back to the file each time. If that fails as well, SDL_GetError may well give you a more meaningful error message when it does.
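If it helps, here is a minimal sketch of that approach, assuming SDL2 and the asker's duck.bmp; the variable names and the single copy are just illustrative:

#include <SDL.h>
#include <cstdio>

int main()
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
        return 1;
    }

    // Hit the disk exactly once.
    SDL_Surface* original = SDL_LoadBMP("duck.bmp");
    if (!original) {
        std::fprintf(stderr, "SDL_LoadBMP failed: %s\n", SDL_GetError());
        SDL_Quit();
        return 1;
    }

    // Make an in-memory copy in the same pixel format instead of re-reading the file.
    SDL_Surface* copy = SDL_ConvertSurface(original, original->format, 0);
    if (!copy)
        std::fprintf(stderr, "SDL_ConvertSurface failed: %s\n", SDL_GetError());

    // ... manipulate the copy here ...

    SDL_FreeSurface(copy);
    SDL_FreeSurface(original);
    SDL_Quit();
    return 0;
}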
If you are also writing data back to that file, make sure you're properly closing it, or you might be running into a permissions sort of issue. I'm pretty sure that Windows won't allow you to open a file for reading that is already open for writing. (This seems less likely since you're only hitting the problem after a thousand iterations of your loop, but it's worth checking.)
When you're done with the image, you should call SDL_FreeSurface (see http://wiki.libsdl.org/SDL_FreeSurface). Otherwise, well, the memory is not freed.
As Raptor007 points out, loading an image 1000 times is, ahem, not recommended. I assumed you were doing this to see if there was a memory leak. If not... stop doing it. Once is enough.
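If the file really does need to be loaded repeatedly (say, to hunt for a leak), a minimal sketch of the leak-free loop would pair every successful load with a free; the function name here is made up, not from the original program:

#include <SDL.h>
#include <cstdio>

void load_repeatedly()  // illustrative name
{
    for (int i = 0; i < 10000; ++i) {
        SDL_Surface* bmp = SDL_LoadBMP("duck.bmp");
        if (!bmp) {
            std::fprintf(stderr, "iteration %d: %s\n", i, SDL_GetError());
            break;
        }
        // ... use the surface ...
        SDL_FreeSurface(bmp);  // release the pixel data before the next load
    }
}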
Related
I am a beginner C++ programmer.
I wrote a simple program that creates a char array (the size is the user's choice) and reads whatever information was previously in it. Often you can find something that makes sense, but most of it is just strange characters. I made it write the contents out to a binary file.
Why do I often find multiple copies of the alphabet?
Is it possible to find a picture inside of the RAM chunk I retrieved?
I have heard about file signatures (headers), which come before any of the data in a file, but are there also "trailers" that come at the back, after all the data?
When you read uninitialized data from memory that you allocated, you'll never see any data from another process. You only ever see data that your own process has written. That is: your code plus all the libraries that you called.
This is a security feature of your kernel: It never leaks information from a process unless it's specifically asked to transfer that information.
If you didn't load a picture in memory, you'll never see one using this method.
Assuming your computer runs Linux, Windows, macOS or something like that, there will NEVER be any pictures in the memory your process uses - unless you loaded them into your process yourself. For security reasons, the memory used by other processes is cleared before it gets given to YOUR process. This is the case for all modern OS's, and has been the case for multi-user OS's (Unix, VAX/VMS, etc.) more or less since they were first developed in the 1960's and 1970's - because someone figured out that it's kind of unfun when "your" data can be found by someone else who is just out there fishing for it.
Even a process that has ended will have its memory cleared - how would you like it if your password was still sitting in memory for someone to find after the program that read it had exited? [Programs that hold highly sensitive data, such as encryption keys or passwords, often clear that memory themselves in code, rather than waiting for the OS to do it when the process ends, because the debug functionality described below allows memory contents to be inspected at any time - and the shorter the time the data is held, the less likely a leak of sensitive information.]
Once memory has been allocated to your process and freed again, it will contain whatever happens to be in that memory, since clearing it takes extra time and most of the time you'd want to fill it with something else anyway. So it contains whatever it happens to contain, and if you poke around in it, you will potentially "find stuff". But it's all your own process's work.
Most OS's have a way to read what another process is doing as part of their debug functionality (if you run the debugger on your system, it runs as a separate process, but it needs to be able to access your program's memory while you debug it, so there has to be a way to read the memory of another process). But that requires a little more effort than just calling new or malloc, and you either need extra permissions (superuser, administrator, etc.) or need to be the owner of the other process too.
Of course, if your computer is running DOS or CP/M, it has no such security features, and you get whatever happens to be in the memory (and you could also just make up a pointer to an arbitrary address and read it, as long as you stay within the memory range of the system).
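For what it's worth, here is a small sketch of the experiment described in the question, with hedged comments on what you can expect to see; the buffer size and the file name dump.bin are made up:

#include <cstddef>
#include <cstring>
#include <fstream>

int main()
{
    const std::size_t size = 4096;

    // Memory newly handed to the process by the OS arrives zeroed, so if this
    // allocation came from a fresh page you will see only zero bytes in it.
    char* first = new char[size];

    // Write some of our own data into it, then free it.
    std::strcpy(first, "text written by this very process");
    delete[] first;

    // A later allocation may reuse the freed block, in which case the old
    // contents can still be there - but they were written by this process
    // (or a library it loaded), never by another program.
    char* second = new char[size];

    std::ofstream out("dump.bin", std::ios::binary);
    out.write(second, static_cast<std::streamsize>(size));

    delete[] second;
    return 0;
}

(Strictly speaking, reading uninitialized memory is undefined behaviour in C++, which is fine for a throwaway experiment like this but not for real code.)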
I wrote a C++ program which reads several CSV files and appends the data to a map. The problem is that this data structure exceeds 80% of my memory usage at some point, and then kswapd0 appears and drags my program's CPU usage below 10%, which makes it extremely slow, given the nature of the program.
I do understand what kswapd0 is for - I know it wants to shuffle pages between memory and disk - but I still need my program to run!
Does anybody have a clue on how to overcome it?
The problem is not my program, I can assure you of that, because I split the program into steps [basically grouping the files] and for some groups it doesn't happen; only the really large groups that go over 85% of memory usage bring out kswapd0...
kswapd0 starts working once memory usage passes a certain threshold. You can change that behaviour or disable swapping.
edit after comment:
You can check the swappiness value with this command:
cat /proc/sys/vm/swappiness
Mine was 60, the default. Roughly speaking, the higher the value, the more willing the kernel is to swap; setting it to 0 tells the kernel to avoid swapping as much as possible. To take the swapping pressure off your program, lower the value (or set it to 0):
sudo sysctl vm.swappiness=0
I don't know your hardware, but I'm using AWS EC2 t-type instances. They don't have any swap space, so I disabled swapping and my problem was solved. Hope it helps.
I need to read from a file very often. I load the file into a vector of unsigned char using fread, and subsequent freads are really fast, even if the vector of unsigned char is destroyed right after reading.
It seems to me that something (Windows or the disk) caches the file and thus freads are very fast. I have not read anything about this behaviour, so I am unsure what really causes this.
If I don't use my application for 1 hour or so and then do an fread again, the fread is slow.
It seems to me that the cache got emptied.
Can somebody explain this behaviour to me? I would like to actively use it.
It is a problem for me when the freads are slow.
Memory-mapping the file would work in theory, but the file itself is too big, so I cannot map all of it.
90/10 law
90% of the execution time of a computer program is spent executing 10% of the code
It is not a strict rule, but it usually holds, so lots of programs try to keep recently used data around if possible, because it is very likely that the same data will be accessed again very soon.
Windows is no exception: after being asked to read a file, the OS keeps some data about that file. It remembers the addresses of the pages where the file's data is stored and, if possible, keeps part (or even all) of the file's contents cached in memory, which makes the next read much faster if it happens soon after the first one.
All in all, you are right that there is caching, but I can't say exactly what is going on internally, as I don't work at Microsoft...
Also, to answer the next part of the question: mapping the file into memory may be a solution, but if the file is very large the machine may not have that much address space, so mapping the whole thing wouldn't be an option. However, you can use the 90/10 law here too: map just the part of the file that matters most into memory, and while reading, build a small table of the overall parameters; see the sketch below.
I don't know your exact situation, but it may help.
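As a rough illustration of the partial-mapping idea on Windows (the file name, the offset and the 64 MiB window size are all made up, and error handling is minimal):

#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE file = CreateFileA("big_data.bin", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) {
        std::fprintf(stderr, "CreateFileA failed: %lu\n", GetLastError());
        return 1;
    }

    HANDLE mapping = CreateFileMappingA(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (!mapping) {
        std::fprintf(stderr, "CreateFileMappingA failed: %lu\n", GetLastError());
        CloseHandle(file);
        return 1;
    }

    SYSTEM_INFO si;
    GetSystemInfo(&si);

    // Map a 64 MiB window of the file instead of the whole thing. The offset
    // passed to MapViewOfFile must be a multiple of the allocation granularity
    // (usually 64 KiB), so the desired offset is rounded down first.
    const unsigned long long desiredOffset = 1ull << 30;  // example offset: 1 GiB
    const unsigned long long alignedOffset =
        desiredOffset - (desiredOffset % si.dwAllocationGranularity);
    const SIZE_T windowSize = 64 * 1024 * 1024;

    void* view = MapViewOfFile(mapping, FILE_MAP_READ,
                               (DWORD)(alignedOffset >> 32),
                               (DWORD)(alignedOffset & 0xFFFFFFFFull),
                               windowSize);
    if (view) {
        const unsigned char* bytes = (const unsigned char*)view;
        std::printf("first byte of the window: %u\n", (unsigned)bytes[0]);
        UnmapViewOfFile(view);
    } else {
        std::fprintf(stderr, "MapViewOfFile failed: %lu\n", GetLastError());
    }

    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}

Remapping the window as the hot region moves (or keeping your own table of where things are, as suggested above) keeps memory use bounded even for a file that is far too big to map whole.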
I would like to ask if anybody sees a bottleneck in my code or any way to optimize it.
I am thinking about if my code has a fault somewhere or if I need to choose a completely new approach.
I have memory-mapped a file, and I need to read doubles from this memory-mapped file.
I need to do this around 100,000 times, as fast as possible.
I was expecting that it would be quite fast in Release mode, but that is not the case.
The first time I do it, it takes over 5 seconds. The next time it takes around 200 ms. This is a bit faster (I guess it has to do with the way Windows handles a memory-mapped file), but it is still too slow.
void clsMapping::FeedJoinFeaturesFromMap(vector<double> &uJoinFeatures, int uHPIndex)
{
    // Look up the byte offset of this entry within the memory-mapped file.
    int iBytePos = this->Content()[uHPIndex];
    int iByteCount = 16 * sizeof(double);

    // Copy 16 contiguous doubles straight out of the mapped region.
    uJoinFeatures.resize(16);
    memcpy(&uJoinFeatures[0], &((char*)(m_pVoiceData))[iBytePos], iByteCount);
}
Does anybody see a way to improve my code? I tried hardcoding iByteCount, but that did not really change anything.
Thank you for your ideas.
You're reading 12.5MB of data from the file. That's not so much, but it's still not trivial.
The difference between your first and second run is probably due to file caching - the second time you want to read the file, the data is already in memory so less I/O is required.
However, 5 seconds for reading 12.5MB of data is still a lot. The only reason I can find for this is that your doubles are scattered all over the file, requiring Windows to read a lot more than 12.5MB into memory.
You can avoid memory mapping altogether. If the data is stored in order in the file (not necessarily consecutive, but in order, so you never have to seek backwards), you can skip the memory-mapped file and just seek your way to the right place; see the sketch below.
I doubt this will help much, though. Other things you can do are to reorder your file, if that's at all possible, or to place it on an SSD.
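If you want to try the seek-based route, a minimal sketch might look like this, assuming plain fseek/fread; the file name is made up, and iBytePos plays the same role as in your code:

#include <cstdio>
#include <vector>

bool ReadJoinFeatures(std::FILE* f, long iBytePos, std::vector<double>& uJoinFeatures)
{
    uJoinFeatures.resize(16);
    if (std::fseek(f, iBytePos, SEEK_SET) != 0)
        return false;
    // Read 16 contiguous doubles straight into the vector's storage.
    return std::fread(uJoinFeatures.data(), sizeof(double), 16, f) == 16;
}

int main()
{
    std::FILE* f = std::fopen("voicedata.bin", "rb");  // illustrative name
    if (!f) return 1;

    std::vector<double> features;
    if (ReadJoinFeatures(f, 0, features))
        std::printf("first value: %f\n", features[0]);

    std::fclose(f);
    return 0;
}

Buffered stdio plus the OS file cache usually makes repeated small reads like this reasonably cheap, but as noted above, reordering the file or moving it to an SSD is likely to matter more.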
We have an application running on several thousand identical machines: same OS, same hardware, same application installation. On very rare occasions, a machine locks up. Alt-Tab, Ctrl-Alt-Del, and the application are all unresponsive. When we inspect our application's log file afterwards, we find a series of null characters written at the end, as the last data before the crash.
I'm hoping to use this fact as a means to debug the lockup. My guess is that the number of null characters written is equivalent to the space I needed to allocate for my log statement, but the content was never actually written to disk. I'm also guessing a disk I/O problem occurred, preventing the write and, of course, causing the OS lockup. I can't confirm any of this. So I guess my question is: have you ever seen a condition like this, how did it occur, and how might you go about troubleshooting it?
NTFS does not journal data (only metadata), so things like that can happen. The reason is simply that at the time of the crash/hang, the metadata (file size, data block allocation) had been committed, but not the data (the data block contents). Unfortunately this is normal behavior with NTFS and will not give you any insight into the problem causing the hang.
So the answer is: a crash at the "right" time can cause this.
BTW: The same thing can of course happen with FAT/FAT32.
I've seen this type of thing happen, I think you're looking in the right general direction.
When this happens, I assume you're able to pinpoint the exact hardware? After a failure I'd recommend running a memtest (http://www.memtest.org/).
I've seen this sort of thing with power supplies, bad disk controllers, etc. You can go insane trying to track them down.
Seems like you're going about this the right way - see if you can find a way to force the problem to happen more quickly; when it happens, run the memtest and run chkdsk /R (and check the event log for controller errors while it runs).
any chance you could get a kernel debugger attached?
any chance %SystemRoot%\memory.dmp was produced?