I'm working with a listview control which saves its data to a file using AES encryption. I need to keep the data for every item in the listview in a std::list of std::string. Should I just keep the data encrypted in the std::list and decrypt it into a local variable when it's needed, or is it enough to keep it encrypted in the file only?
To answer this question you need to consider who your attackers are (i.e. who are you trying to hide the data from?).
For this purpose, it helps if you work up a simple Threat Model (basically: Who you are worried about, what you want to protect, the types of attacks they may carry out, and the risks thereof).
Once this is done, you can determine if it is worth your effort to protect the data from being written to the disk (even when held decrypted only in memory).
I know this answer may not seem useful, but I hope it helps you become aware that you need to specifically state (and hence know) who your attackers are before you can correctly defend against them (i.e., otherwise you may end up implementing completely useless defenses, and so on).
Will you be decrypting the same item more than once? If you aren't concerned about in-memory attacks then performance might be the other issue to consider.
If you have time it may be worth coding your solution to allow for both eventualities. Therefore if you choose to cache encrypted, then it's not too much work to change to a decrypted in-memory solution later on if performance becomes an issue.
It is unclear what attack you are attempting to defend against. If the attacker has local access to the system, then they can attach a debugger like OllyDBG to the process to observe its memory. The attack would be to set a breakpoint at the call to AES and then observe the data being passed in and returned - very simple.
I agree with the answer from silky that you have to start with a basic threat model. I just wanted to point out that when handling sensitive information in memory, you have every right to be concerned that the information may end up on disk even if you do not write it out.
For example, the data in memory could be written to swap space or could end up in a core file, and from there go elsewhere (such as an email attachment or a copy in other places). You can deal with these without encrypting the data in memory, since encrypting it may just shift the problem to protecting the key used to decrypt that data...
Related
I'm working on a C++ application which keeps some user secret keys in RAM. These secret keys are highly sensitive, and I must minimize the risk of any kind of attack against them.
I'm using a character array to store these keys. I've read some material about storing variables in CPU registers or even the CPU cache (e.g. using the C++ register keyword), but it seems there is no guaranteed way to force an application to store some of its variables outside of RAM (i.e. in CPU registers or cache).
Can anybody suggest a good way to do this, or suggest any other solution to keep these keys securely in RAM (I'm looking for an OS-independent solution)?
Your intentions may be noble, but they are also misguided. The short answer is that there's really no way to do what you want on a general purpose system (i.e. commodity processors/motherboard and general-purpose O/S). Even if you could, somehow, force things to be stored on the CPU only, it still would not really help. It would just be a small nuisance.
More generally, on the issue of protecting memory, there are OS-specific solutions to indicate that blocks of memory should not be written out to the pagefile, such as the VirtualLock function on Windows. Those are worth using if you are doing crypto and holding sensitive data in that memory.
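For example, a minimal sketch on Windows (error handling omitted; the buffer and function names are just illustrative):

    #include <windows.h>

    // Sketch: pin a key buffer in physical RAM so it is not written to the
    // pagefile, and wipe it before unlocking.
    unsigned char key[32];

    void protect_key()
    {
        VirtualLock(key, sizeof(key));
        // ... fill and use the key ...
    }

    void release_key()
    {
        SecureZeroMemory(key, sizeof(key)); // a wipe the compiler cannot remove
        VirtualUnlock(key, sizeof(key));
    }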
One last thing: it worries me that you have a fundamental misunderstanding of the register keyword and its security implications; remember, it is only a hint, and it won't - indeed, it cannot - force anything to actually be stored in a register or anywhere else.
Now, that, by itself, isn't a big deal, but it is a concern here because it indicates that you do not really have a good grasp on security engineering or risk analysis, which is a big problem if you are designing or implementing a real-world cryptographic solution. Frankly, your post suggests (to me, at least) that you aren't quite ready to architect or implement such a system.
You can't eliminate the risk, but you can mitigate it.
Create a single area of static memory that will be the only place that you ever store cleartext keys. And create a single buffer of random data that you will use to xor any keys that are not stored in this one static buffer.
Whenever you read a key into memory, from a keyfile or something, you only read it directly into this one static buffer, xor it with your random data, copy it out wherever you need it, and immediately clear the buffer with zeroes.
You can compare any two keys by just comparing their masked versions. You can even compare hashes of masked keys.
If you need to operate on the cleartext key - e.g. to generate a hash or validate the key somehow - load the masked (xor'ed) key into this one static buffer, xor it back to cleartext and use it. Then write zeroes back into that buffer.
The operation of unmasking, operating and remasking should be quick. Don't leave the buffer sitting around unmasked for a long time.
If someone were to try a cold-boot attack, pulling the plug on the hardware and inspecting the memory chips, there would be only one buffer that could possibly hold a cleartext key, and odds are that during the particular instant of the cold-boot attack the buffer would be empty.
When operating on the key, you could even unmask just one word of the key at a time just before you need it to validate the key such that a complete key is never stored in that buffer.
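A rough sketch of the idea in C++ (all names are mine, not from any library; in real code use SecureZeroMemory/explicit_bzero rather than plain memset so the wipes are not optimized away):

    #include <cstddef>
    #include <cstring>

    const std::size_t KEY_LEN = 32;

    static unsigned char xor_mask[KEY_LEN]; // filled once from a CSPRNG
    static unsigned char scratch[KEY_LEN];  // the single cleartext buffer

    // Store a key in masked form: cleartext only ever touches 'scratch'.
    void mask_key(const unsigned char *cleartext, unsigned char *masked_out)
    {
        std::memcpy(scratch, cleartext, KEY_LEN);
        for (std::size_t i = 0; i < KEY_LEN; ++i)
            masked_out[i] = scratch[i] ^ xor_mask[i];
        std::memset(scratch, 0, KEY_LEN);   // wipe immediately
    }

    // Briefly unmask a key, hand it to 'use', then wipe the buffer again.
    template <typename F>
    void with_cleartext(const unsigned char *masked, F use)
    {
        for (std::size_t i = 0; i < KEY_LEN; ++i)
            scratch[i] = masked[i] ^ xor_mask[i];
        use(scratch, KEY_LEN);
        std::memset(scratch, 0, KEY_LEN);
    }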
Update: I just wanted to address some criticisms in the comments below:
The phrase "security through obscurity" is commonly misunderstood. In the formal analysis of security algorithms "obscurity" or methods of hiding data that are not crytpographically secure do not increase the formal security of a cryptographic algorithm. And it is true in this case. Given that keys are stored on the users machine, and must be used by that program on that machine there is nothing that can be done to make the keys on this machine cryptographically secure. No matter what process you use to hide or lock the data at some point the program must use it, and a determined hacker can put breakpoints in the code and watch when the program uses the data. But no suggestion in this thread can eliminate that risk.
Some people have suggested that the OP find a way to use special hardware with locked memory chips or some operating-system method of locking a chip. This is cryptographically no more secure. Ultimately, if you have physical access to the machine, a determined enough hacker could use a logic analyzer on the memory bus and recover any data. Besides, the OP has stated that the target systems don't have such specialized hardware.
But this doesn't mean that there aren't things you can do to mitigate risk. Take the simplest of access keys - the password. If you have physical access to a machine you can put in a key logger, or get memory dumps of running programs, etc. So formally the password is no more secure than if it were written in plaintext on a sticky note glued to the keyboard. Yet everyone knows keeping a password on a sticky note is a bad idea, and that it is bad practice for programs to echo passwords back to the user in plaintext, because practically speaking this dramatically lowers the bar for an attacker. Yet formally a sticky note with a password is no less secure.
The suggestion I make above has real security advantages. None of the details matter except the xor masking of the security keys. And there are ways of making this process a little better. Xor'ing the keys limits the number of places that the programmer must consider as attack vectors. Once the keys are xor'd, you can have different keys all over your program, you can copy them, write them to a file, send them over the network, etc. None of these things will compromise your program unless the attacker has the xor buffer. So there is a SINGLE BUFFER that you have to worry about. You can then relax about every other buffer in the system (and you can mlock or VirtualLock that one buffer).
Once you clear out that xor buffer, you permanently and securely eliminate any possibility that an attacker can recover any keys from a memory dump of your program. You are limiting your exposure both in terms of the number of places and the times that keys can be recovered. And you are putting in place a system that allows you to work with keys easily without worrying during every operation on an object that contains keys about possible easy ways the keys can be recovered.
So you can imagine, for example, a system where keys refcount the xor buffer, and when all keys are no longer needed, you zero and delete the xor buffer and all keys become invalidated and inaccessible, without you having to track them down or worry about whether a memory page got swapped out and still holds plaintext keys.
You also don't have to literally keep around a buffer of random data. You could for example use a cryptographically secure random number generator, and use a single random seed to generate the xor buffer as needed. The only way an attacker can recover the keys is with access to the single generator seed.
You could also allocate the plaintext buffer on the stack as needed, and zero it out when done, such that it is extremely unlikely that the stack ever leaves the on-chip cache. If the complete key is never decoded, but decoded one word at a time as needed, even access to the stack buffer won't reveal the key.
There is no platform-independent solution. All the threats you're addressing are platform specific and thus so are the solutions. There is no law that requires every CPU to have registers. There is no law that requires CPUs to have caches. The ability for another program to access your program's RAM, in fact the existence of other programs at all, are platform details.
You can create some functions like "allocate secure memory" (that by default calls malloc) and "free secure memory" (that by default calls memset and then free) and then use those. You may need to do other things (like lock the memory to prevent your keys from winding up in swap) on platforms where other things are needed.
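Something along these lines (a sketch of the idea only; the names are mine and the default implementation is deliberately trivial):

    #include <cstdlib>
    #include <cstring>

    // Default implementation: plain malloc/free plus a wipe. A platform port
    // could add mlock/VirtualLock here to keep the pages out of swap.
    void *secure_alloc(std::size_t n)
    {
        return std::malloc(n);
    }

    void secure_free(void *p, std::size_t n)
    {
        if (p) {
            std::memset(p, 0, n); // prefer SecureZeroMemory/explicit_bzero so
                                  // the wipe is not optimized away
            std::free(p);
        }
    }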
Aside from the very good comments above, you have to consider that even IF you succeed in getting the key stored in registers, that register content will most likely get stored in memory when an interrupt comes in, and/or when another task gets to run on the machine. And of course, someone with physical access to the machine can run a debugger and inspect the registers. The debugger may be an "in-circuit emulator" if the key is important enough that someone will spend a few thousand dollars on such a device - which means no software on the target system at all.
The other question is of course how much this matters. Where do the keys originate from? Is someone typing them in? If not, and they are stored somewhere else (in the code, on a server, etc.), then they will end up in memory at some point, even if you succeed in keeping them out of memory while you actually use them. If someone is typing them in, isn't the security risk rather that someone, one way or another, forces the person(s) who know the keys to reveal them?
As others have said, there is no secure way to do this on a general purpose computer. The alternative is to use a Hardware Security Module (HSM).
These provide:
greater physical protection for the keys than normal PCs/servers (protecting against direct access to RAM);
greater logical protection as they are not general purpose - no other software is running on the machine so no other processes/users have access to the RAM.
You can use the HSM's API to perform the cryptographic operations you need (assuming they are somewhat standard) without ever exposing the unencrypted key outside of the HSM.
If your platform supports POSIX, you would want to use mlock to prevent your data from being paged to the swap area. If you're writing code for Windows, you can use VirtualLock instead.
Keep in mind that there's no absolute way to protect the sensitive data from getting leaked if you require the data to be in its unencrypted form at any point in time in RAM (we're talking about plain ol' RAM here, nothing fancy like TrustZone). All you can do (and hope for) is to minimize the amount of time that the data remains unencrypted, so that the adversary has less time to act upon it.
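A minimal POSIX sketch of the mlock approach (error handling mostly omitted):

    #include <sys/mman.h>
    #include <cstring>

    unsigned char key[32];

    int main()
    {
        if (mlock(key, sizeof(key)) != 0)   // keep these pages out of swap
            return 1;                       // e.g. RLIMIT_MEMLOCK too low

        // ... derive and use the key ...

        std::memset(key, 0, sizeof(key));   // use explicit_bzero where available
        munlock(key, sizeof(key));
        return 0;
    }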
If yours is a user-mode application and the memory you are trying to protect is against other user-mode processes, try the CryptProtectMemory API (not for persistent data).
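Roughly like this (a sketch only; the buffer length must be a multiple of CRYPTPROTECTMEMORY_BLOCK_SIZE, and error checking is omitted):

    #include <windows.h>
    #include <dpapi.h>   // link with crypt32.lib

    unsigned char secret[CRYPTPROTECTMEMORY_BLOCK_SIZE * 2];

    void protect()
    {
        // After this call the buffer holds ciphertext that only this
        // process can decrypt.
        CryptProtectMemory(secret, sizeof(secret),
                           CRYPTPROTECTMEMORY_SAME_PROCESS);
    }

    void use_secret()
    {
        CryptUnprotectMemory(secret, sizeof(secret),
                             CRYPTPROTECTMEMORY_SAME_PROCESS);
        // ... use the plaintext briefly ...
        CryptProtectMemory(secret, sizeof(secret),
                           CRYPTPROTECTMEMORY_SAME_PROCESS);
    }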
As the other answers mentioned, you may implement a software solution, but if your program runs on a general-purpose machine and OS and the attacker has access to your machine, it will not protect your sensitive data. If your data is really very sensitive and an attacker can physically access the machine, a general software solution won't be enough.
I once saw some platforms dealing with very sensitive data which had sensors to detect when someone was physically accessing the machine, and which would actively delete the data when that was the case.
You already mentioned the cold-boot attack; the problem is that data in ordinary RAM can still be read for minutes after shutdown.
I'm writing a server in C++ that will handle secure connections over which sensitive data will be sent.
The goal is to never save the data in unencrypted form anywhere outside memory, and to keep it in a defined region of memory (to be overwritten once it's no longer needed).
Will allocating a large chunk of memory and using it to store the sensitive data be sufficient to ensure that there is no leakage of data?
From the manual of a tool that handles passwords:
It's also debatable whether mlock() is a proper way to protect sensitive
information. According to POSIX, mlock()-ing a page guarantees that it
is in memory (useful for realtime applications), not that it isn't
in the swap (useful for security applications). Possibly an encrypted
swap partition (or no swap partition) is a better solution.
However, Linux does guarantee that locked pages are kept out of swap, and its man page specifically discusses the security applications. It also mentions:
But be aware that the suspend mode on laptops and some desktop computers will
save a copy of the system's RAM to disk, regardless of memory locks.
Why don't you use SELinux? Then no process can access other processes' stuff unless you say it can.
I think if you are securing a program handling sensitive data, you should start by using a secure OS. If the OS is not secure enough then there is nothing your application can do to fix that.
And maybe when using SELinux you don't have to do anything special in your application, making your application smaller, simpler and also more secure?
What you want is locking some region of memory into RAM. See the manpage for mlock(2).
Locking the memory (or, if you use Linux, using large pages, since these cannot be paged out) is a good start. All other considerations aside, this at least ensures that plaintext is not written to the hard disk in unpredictable ways.
Overwriting memory when no longer needed does not hurt, but is probably useless, because
any pages that are reclaimed and later given to another process will be zeroed out by the operating system anyway (every modern OS does that)
as long as some data is on a computer, you must assume that someone will be able to steal it, one way or the other
there are more exploits in the operating system and in your own code than you are aware of (this happens to the best programmers, and it happens again and again)
There are countless concerns when attempting to prevent someone from stealing sensitive data, and it is by no means an easy endeavour. Encrypting data, trying not to have any obvious exploits, and trying to avoid the most stupid mistakes is as good as you will get. Beyond that, nothing is really safe, because for every N things you plan for, there exists an (N+1)th thing you didn't.
Take my wife's work laptop as a prime example. The intern setting up the machines in their company (at least it's my guess that he's an intern) takes every possible measure and configures everything in paranoia mode to ensure that data on the computer cannot be stolen and that working becomes as much of an ordeal as possible. What you end up with is a BitLocker-protected computer that takes 3 passwords to even boot up, on which you can practically do nothing, and a screensaver that locks the workstation every time you pick up the phone and forget to shake the mouse. At the same time, this super-secure computer has an enabled FireWire port over which anybody can read and write anything in the computer's memory without a password.
I've got some data that I want to save on Amazon S3. Some of this data is encrypted and some is compressed. Should I be worried about single bit flips? I know of the MD5 hash header that can be added. This (from my experience) will guard against flips in the most unreliable portion of the deal (network communication); however, I'm still wondering whether I need to guard against flips on disk.
I'm almost certain the answer is "no", but if you want to be extra paranoid you can precalculate the MD5 hash before uploading, compare that to the MD5 hash you get after upload, then when downloading calculate the MD5 hash of the downloaded data and compare it to your stored hash.
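A sketch of that comparison, assuming OpenSSL is available (the S3 upload/download calls themselves are omitted):

    #include <openssl/evp.h>
    #include <cstddef>
    #include <cstring>

    // Compute the MD5 of each buffer: keep the digest of what you uploaded and
    // compare it with the digest of what you later download.
    bool same_md5(const unsigned char *a, std::size_t a_len,
                  const unsigned char *b, std::size_t b_len)
    {
        unsigned char da[EVP_MAX_MD_SIZE], db[EVP_MAX_MD_SIZE];
        unsigned int la = 0, lb = 0;
        EVP_Digest(a, a_len, da, &la, EVP_md5(), nullptr);
        EVP_Digest(b, b_len, db, &lb, EVP_md5(), nullptr);
        return la == lb && std::memcmp(da, db, la) == 0;
    }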
I'm not sure exactly what risk you're concerned about. At some point you have to defer the risk to somebody else. Does "corrupted data" fall under Amazon's Service Level Agreement? Presumably they know what the file hash is supposed to be, and if the hash of the data they're giving you doesn't match, then it's clearly their problem.
I suppose there are other approaches too:
Store your data with forward error correction (FEC) so that you can detect and correct up to N bit errors, for your choice of N.
Store your data more than once in Amazon S3, perhaps across their US and European data centers (I think there's a new one in Singapore coming online soon too), with RAID-like redundancy so you can recover your data if some number of sources disappear or become corrupted.
It really depends on just how valuable the data you're storing is to you, and how much risk you're willing to accept.
I see your question from two points of view, a theoretical one and a practical one.
From a theoretical point of view, yes, you should be concerned - and not only about bit flipping, but about several other possible problems. In particular, section 11.5 of the customer agreement says that Amazon
MAKE NO REPRESENTATIONS OR WARRANTIES OF ANY KIND, WHETHER EXPRESS, IMPLIED, STATUTORY OR OTHERWISE WITH RESPECT TO THE SERVICE OFFERINGS. (..omiss..) WE AND OUR LICENSORS DO NOT WARRANT THAT THE SERVICE OFFERINGS WILL FUNCTION AS DESCRIBED, WILL BE UNINTERRUPTED OR ERROR FREE, OR FREE OF HARMFUL COMPONENTS, OR THAT THE DATA YOU STORE WITHIN THE SERVICE OFFERINGS WILL BE SECURE OR NOT OTHERWISE LOST OR DAMAGED.
Now, in practice, I'd not be concerned. If your data is lost, you'll blog about it, and (although they might not face any legal action) their business will be pretty much over.
On the other hand, it depends on how vital your data is. Suppose you were rolling your own stuff in your own data center(s). How would you plan for disaster recovery there? If you'd say "I'd just keep two copies in two different racks", then just use the same technique with Amazon, maybe keeping two copies in two different data centers (since you wrote that you are not interested in how to protect against bit flips, I'm providing only a trivial example here).
Probably not: Amazon uses checksums to protect against bit flips, regularly combing through data at rest and ensuring that no bit flips have occurred. So, unless you have corruption in all instances of the data within the interval between integrity-check passes, you should be fine.
Internally, S3 uses MD5 checksums throughout the system to detect/protect against bitflips. When you PUT an object into S3, we compute the MD5 and store that value. When you GET an object we recompute the MD5 as we stream it back. If our stored MD5 doesn't match the value we compute as we're streaming the object back we'll return an error for the GET request. You can then retry the request.
We also continually loop through all data at rest, recomputing checksums and validating them against the MD5 we saved when we originally stored the object. This allows us to detect and repair bit flips that occur in data at rest. When we find a bit flip in data at rest, we repair it using the redundant data we store for each object.
You can also protect yourself against bit flips during transmission to and from S3 by providing an MD5 checksum when you PUT the object (we'll return an error if the data we received doesn't match the checksum) and by validating the MD5 when you GET an object.
Source:
https://forums.aws.amazon.com/thread.jspa?threadID=38587
There are two ways of reading your question:
"Is Amazon S3 perfect?"
"How do I handle the case where Amazon S3 is not perfect?"
The answer to (1) is almost certainly "no". They might have lots of protection to get close, but there is still the possibility of failure.
That leaves (2). The fact is that devices fail, sometimes in obvious ways and other times in ways that appear to work but give an incorrect answer. To deal with this, many databases use a per-page CRC to ensure that a page read from disk is the same as the one that was written. This approach is also used in modern filesystems (for example ZFS, which can write multiple copies of a page, each with a CRC, to handle RAID controller failures; I have seen ZFS correct single-bit errors from a disk by reading a second copy - disks are not perfect).
In general you should have a check to verify that your system is operating as you expect. Using a hash function is a good approach. What approach you take when you detect a failure depends on your requirements. Storing multiple copies is probably the best approach (and certainly the easiest), because you get protection from site failures, connectivity failures and even vendor failures (by choosing a second vendor), instead of just redundancy in the data itself as with FEC.
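As a sketch of the per-page idea, assuming zlib is available (page size and layout are illustrative):

    #include <zlib.h>
    #include <cstddef>
    #include <cstdint>

    const std::size_t PAGE_SIZE = 4096;

    // Store this alongside each page when you write it...
    std::uint32_t page_crc(const unsigned char *page)
    {
        return static_cast<std::uint32_t>(
            crc32(crc32(0L, Z_NULL, 0), page, PAGE_SIZE));
    }

    // ...and verify it when you read the page back.
    bool page_ok(const unsigned char *page, std::uint32_t stored_crc)
    {
        return page_crc(page) == stored_crc;
    }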
There's a SecureZeroMemory() function in WinAPI that is designed for erasing the memory used for storing passwords/encryption keys/similar stuff when the buffer is no longer needed. It differs from ZeroMemory() in that its call will not be optimized out by the compiler.
Is it really so necessary to erase the memory used for storing sensitive data? Does it really make the application more secure?
I understand that data could be written into the swap file or into the hibernation file, and that other processes could possibly read my program's memory. But the same could happen while the data is still in use. Why is use-then-erase better than just use?
It does. The hibernation file is not encrypted, for example, and if you don't securely clear the memory, you might end up in trouble. That's just a single example, though. You should hold secret data in memory only as long as needed.
It exists for a reason. :)
If you keep sensitive data in memory, then other processes can potentially read it.
Of course, in your application, passwords or other secure data may not be so critical that this is required. But in some applications, it's pretty essential that malicious code can't just snoop your passwords or credit card numbers or whatever other data the application uses.
Also note that some OSes might not zero memory before giving it to an application; this means that an application could simply request memory, scan it for possibly interesting content, and do something with it.
If that application only ever got zeroed memory, it would of course have a much harder time finding interesting data.
SecureZeroMemory() will certainly not make your application perfectly secure. The fact that the password was already in memory is already a security hole. Using SecureZeroMemory() will definitely make it less likely that your password can be retrieved. I don't see any reason not to use it, so why not? Just remember that there are many other things you have to worry about too.
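Typical usage looks like this:

    #include <windows.h>

    void handle_password()
    {
        char password[64];
        // ... read and use the password ...
        SecureZeroMemory(password, sizeof(password)); // unlike a plain memset,
                                                      // this is not optimized away
    }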
If you actually have password data or other secrets, you're also going to want to make sure that the memory they are in doesn't get swapped out; otherwise the swap file can become a problem (I think the function you want is VirtualLock for a Windows app). Further, you'll need to detect Windows going into hibernation and wipe the data at that point too. I believe Windows sends a message to every app when it's about to hibernate.
My app keeps track of the state of about 1000 objects. Those objects are read from and written to a persistent store (serialized) in no particular order.
Right now the app uses the registry to store each object's state. This is nice because:
It is simple
It is very fast
Individual object's state can be read/written without needing to read some larger entity (like pulling out a snippet from a large XML file)
There is a decent editor (RegEdit) which allow easily manipulating individual items
Having said that, I'm wondering if there is a better way. SQLite seems like a possibility, but you don't have the same level of multiple-reader/multiple-writer support that you get with the registry, and there's no simple way to edit existing entries.
Any better suggestions? A bunch of flat files?
If what you mean by 'multiple-reader/multiple-writer' is that you keep a lot of threads writing to the store concurrently, SQLite is threadsafe (you can have concurrent SELECTs, and concurrent writes are handled transparently). See the FAQ [1] and grep for 'threadsafe'.
[1]: http://www.sqlite.org/faq.html
If you do begin to experiment with SQLite, you should know that "out of the box" it might not seem as fast as you would like, but it can quickly be made to be much faster by applying some established optimization tips:
SQLite optimization
Depending on the size of the data and the amount of RAM available, one of the best performance gains will occur by setting sqlite to use an all-in-memory database rather than writing to disk.
For in-memory databases, pass ":memory:" as the filename argument to sqlite3_open (or NULL, provided TEMP_STORE is defined appropriately).
On the other hand, if you tell sqlite to use the harddisk, then you will get a similar benefit to your current usage of RegEdit to manipulate the program's data "on the fly."
The way you could simulate your current RegEdit technique with sqlite would be to use the sqlite command-line tool to connect to the on-disk database. You can run UPDATE statements on the sql data from the command-line while your main program is running (and/or while it is paused in break mode).
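A minimal sketch of opening it from C++ (the table layout is just illustrative); pass a file path instead of ":memory:" to get the on-disk database you can also poke at with the command-line tool:

    #include <sqlite3.h>
    #include <cstdio>

    int main()
    {
        sqlite3 *db = nullptr;
        if (sqlite3_open(":memory:", &db) != SQLITE_OK) {
            std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }
        sqlite3_exec(db,
            "CREATE TABLE objects (id INTEGER PRIMARY KEY, state BLOB);",
            nullptr, nullptr, nullptr);
        sqlite3_close(db);
        return 0;
    }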
I doubt any sane person would go this route these days; however, some of what you describe could be done with Windows Structured/Compound Storage. I only mention this since you're asking about Windows - and this is/was an official Windows way to do it.
This is how DOC files were put together (but not the new DOCX format). From MSDN it'll appear really complicated, but I've used it; it isn't the worst API in Win32.
It is not simple.
It is fast; I would guess it's faster than the registry.
Individual object's state can be read/written without needing to read some larger entity.
There is no decent editor; however, there are some really basic tools (VC++ 6.0 had the "DocFile Viewer" under Tools - yeah, that's what that thing did). I found a few more online.
You get a file instead of registry keys.
You gain some old-school Windows developer geek-cred.
Other random thoughts:
I think XML is the way to go (despite the random-access issue). Heck, INI files may work. The registry gives you very fine-grained security if you need it - people seem to forget this when they claim files are better. An embedded DB seems like overkill if I'm understanding what you're doing.
Do you need to persist the objects on each change event, or can you keep them in memory and store them on shutdown? If the latter, just load them up and serialize them at the end; assuming your app runs for a long time (and you don't share that state with another program), in-memory is going to be a winner.
If you've got fixed size structures then you could consider just using a memory mapped file and allocate memory from that?
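For example, a Windows sketch of what that could look like (names, sizes and error handling are glossed over):

    #include <windows.h>

    struct ObjectState { int id; char data[60]; }; // fixed-size record
    const DWORD kCount = 1000;

    int main()
    {
        HANDLE file = CreateFileW(L"state.bin", GENERIC_READ | GENERIC_WRITE, 0,
                                  nullptr, OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                                  nullptr);
        HANDLE map  = CreateFileMappingW(file, nullptr, PAGE_READWRITE, 0,
                                         kCount * sizeof(ObjectState), nullptr);
        ObjectState *objects = static_cast<ObjectState *>(
            MapViewOfFile(map, FILE_MAP_ALL_ACCESS, 0, 0, 0));

        objects[42].id = 42;   // changes land in the backing file

        UnmapViewOfFile(objects);
        CloseHandle(map);
        CloseHandle(file);
        return 0;
    }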
If the only thing you do is serialize/deserialize individual objects (no fancy queries), then use a btree database, for example Berkeley DB. It is very fast at storing and retrieving chunks of data by key (I assume your objects have some id that can be used as a key) and access by multiple processes is supported.