Secure configuration file on clients - C++

In a project we will create a configuration file for each client (it could also be an SQLite database on each client instead of a configuration file). Those files will include critical information such as policies. Therefore the end user mustn't be able to add, delete, or change the configuration file or anything in it.
I am considering using Active Directory to prevent users from opening the folder that contains the configuration file.
Is there a standard way to secure configuration files?
EDIT:
Of course, the speed of reading the file is as important as security.
EDIT2:
I can't do this with a DB server because my policies must be accessible without an internet connection too. A server will update the file or the SQLite tables periodically. And I am using C++.

I'm sorry to crush your hopes and dreams, but if your security is based on that configuration file on the client, you're screwed.
The configuration file is loaded and decrypted by your application, which means that its values can be changed with special tools while the application runs.
If security is important, do those checks on the server, not the client.
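That said, if you just want to make offline tampering with the file harder (which is the asker's actual goal), the server can sign the file and the client can refuse to load it when the signature doesn't verify. Below is a minimal sketch of that idea in Python using the `cryptography` package (the asker's C++ app would do the same with, e.g., OpenSSL). The file names are made up, and a patched or debugged client can still bypass the check:

```python
# Minimal sketch: the server signs the policy file, the client verifies it
# before loading. This stops casual offline edits of the file; it does NOT
# stop someone patching or debugging the client itself.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- server side (the private key never leaves the server) ---
private_key = Ed25519PrivateKey.generate()
config = open("policies.cfg", "rb").read()          # hypothetical file name
open("policies.sig", "wb").write(private_key.sign(config))

# --- client side (ship only the public key, e.g. hard-coded in the binary) ---
public_key = private_key.public_key()
try:
    public_key.verify(open("policies.sig", "rb").read(),
                      open("policies.cfg", "rb").read())
    # signature checks out: safe to parse the policies
except InvalidSignature:
    raise SystemExit("configuration file was tampered with")
```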

Security is a fairly broad matter. What happens if your system is compromised? Does someone lose money? Does someone get extra points in a game? Does someone gain access to nuclear missile launch codes? Does someone's medical data get exposed to the public?
All of these are more or less important security concerns, but as you can imagine, nuclear missile launch codes have far stricter requirements for complete security than a game where someone might boost their score; money and health obviously fall somewhere in the middle of that range, along with many other things we could add to the list.
It also matters what type of "users" you are trying to protect against. Is it national-level security experts (e.g. FBI, CIA, KGB, etc.), hobby hackers, or just normal computer users? Encrypting the file will stop a regular user, and perhaps a hobby hacker, but national security experts certainly won't be foiled by that.
Ultimately, though, if the machine holding the data also knows how to read the data, then you cannot have a completely secure system. The system can be bypassed by reading the key from the code and re-implementing whatever encryption/decryption is part of your "security". And once the data is in plain text, it can be modified, re-encrypted, and stored back.
You can of course make it more convoluted, which will mean that someone will have to have a stronger motive to work their way through your convoluted methods, but in the end, it comes down to "If the machine knows how to decrypt something, someone with access to the machine can decrypt the content".
It is up to you (and obviously your "customers" and/or "partners" whose data you are looking after), whether you think that is something you can risk or not.

Related

Where to store user file uploads?

In my compojure app, where should I store user upload files? Do I just make a user-upload dir in my project root and stick everything in there? Is there anything special I should do (classpath, permissions, etc)?
To properly answer your question, you need to think about the lifecycle of the uploaded files. I would start by answering questions such as:
how big are the files going to be?
what storage options will hold enough data to store all the uploads?
what about SLAs, redundancy, and disaster avoidance?
how, and by whom, will the free space and health of the storage be monitored?
In general, the file system location is much less relevant than the block device sitting behind it: as long as your data is stored safely enough for your application, user-upload can be anywhere and be anything from a regular disk to an S3 bucket e.g. via s3fs-fuse.
Putting such a folder on your classpath sounds odd to me. It offers no real benefit, as you will always need a configuration entry to state where to store and read files from anyway.
Permission-wise, your application will require at least write access to the upload storage (and most likely read access as well). Granting those permissions depends on the physical device you choose: if you opt for the local file system, as you suggest in your question, you need to make sure the Clojure app runs as a user with read/write permission on that directory (e.g. chmod u+rw), whereas for S3 you will need to configure API keys instead. A small sketch of this follows.
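Here is a sketch (in Python for brevity; the Clojure version follows the same flow) of resolving the upload directory from configuration rather than the classpath, creating it with owner-only permissions. `UPLOAD_DIR` is an assumed config entry, not part of any framework:

```python
import os

# Hypothetical config entry; could equally come from an EDN file, etc.
UPLOAD_DIR = os.environ.get("UPLOAD_DIR", "/var/data/user-uploads")

def save_upload(name: str, data: bytes) -> str:
    os.makedirs(UPLOAD_DIR, mode=0o700, exist_ok=True)  # app user only
    safe_name = os.path.basename(name)  # stop "../../etc/passwd" escapes
    path = os.path.join(UPLOAD_DIR, safe_name)
    with open(path, "wb") as f:
        f.write(data)
    return path
```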
For anything other than a practice problem, I would suggest using a database such as Postgres or Datomic. This way, you get the reliability of a DB with real transactions, along with the ability to access the files across a network from any location.

How to (programmatically or by other means) encrypt or protect customer data

I am working on a web project and I want to handle user data in a way that (as far as possible) reduces the damage to users' privacy if someone compromises our servers/databases.
Of course we only have user data that is needed for the website to do its job, but because of the nature of the project we have quite a bit of information on our users (part of the functionality is applying to jobs and sending your CV along with the application).
We thought about encrypting/decrypting sensitive data with a private/public key pair whose private key is encrypted with the user's password, but found some security and implementation problems with that :P
The question is: how do you implement user privacy and protection against data theft on a centralised web server, using browser-compatible protocols, when the functionality requires that users can exchange sensitive data?
To give some additional insight: this project is not yet in production stage so there is still time to make things right.
We are already doing some basic stuff like:
serving https
enforcing https for sites that may handle sensitive data
hashing salted passwords
some hardening of our server and services on it
encrypted harddrives to prevent someone from reading all client information after stealing our servers / harddrives
But that's about it. Besides the password hashes, there is no mechanism that would stop, or at least hinder, someone who managed to get into (part of) the server from gaining all data on all our users. Nor do we see a way to encrypt user data so that we ourselves cannot read it, as we need the data (we wouldn't have collected it otherwise) for part of the website / the functionality we want it to provide.
Even if we somehow managed (maybe with some JavaScript) to have all data reach us already encrypted by the client's browser, and we served each client his private key encrypted with some passphrase (for example his login password), we could not, for example, scan user-uploaded files for viruses and the like. On the other hand, client-side encryption, at least with the browser/webserver concept as we imagine it, would leave some security issues (you are welcome to prove me wrong) and seems quite like reinventing the wheel; and since this project is not primarily about privacy (privacy is rather a desirable property), we might not want to reinvent the wheel for it.
I strongly believe I am not the first web developer thinking about this, am I? So what have other projects done? What have you done to try to protect your users' data?
If relevant: we are using Django and PostgreSQL for most things, and JavaScript for some UI.
The common way to deal with this issue is to split (partition) your data.
Keep minimal data on the Internet-facing web server and pass any sensitive data as quickly as possible to another server that is kept inside a second firewall. Often, data is pulled from the web server by the internal secure server to further increase security. This is how banks and finance houses handle sensitive data from the internet (or at least they should). There is even a set of standards (PCI DSS) that covers the secure handling of credit card transactions and explains all of this in mind-numbing detail.
To further secure the internal server, you can put it on a separate network and secure physical access to it. You can also focus other security tools on it such as Data Loss Protection and Intrusion Protection.
In addition, if you have any data that you don't need to see in the clear, use a client-side encryption library to encrypt it locally. There are still risks of course, since the user's workstation might be compromised by malware, but it still removes risks during data transmission and from server-side storage. It also puts responsibility onto the user rather than just onto your central servers.
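One pattern worth knowing here is the one the question already hinted at: wrap a random per-user data key with a key derived from the user's password, so the server stores only the salt and the wrapped key. A minimal sketch with Python's `cryptography` package (truly client-side encryption would run this in the browser instead; the names and password are illustrative):

```python
import base64, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def wrap_data_key(password: str, salt: bytes, data_key: bytes) -> bytes:
    """Encrypt a per-user data key with a key derived from the password.
    The server stores only the salt and the wrapped key."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    kek = base64.urlsafe_b64encode(kdf.derive(password.encode()))
    return Fernet(kek).encrypt(data_key)

salt = os.urandom(16)
data_key = Fernet.generate_key()   # this key encrypts the user's actual data
wrapped = wrap_data_key("hunter2", salt, data_key)
# Without the password, neither the server nor a data thief can unwrap
# data_key (which is also why simple password resets become impossible).
```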
You already seem to be a long way ahead of most web developers in ensuring that your customers are kept safe and secure. One other small change worth considering would be to turn on enforced HTTPS for all transactions with your site. That way, there is very little chance of unexpected data leakage, such as data being unexpectedly cached.
UPDATE:
Client-side encryption can help a lot, since it puts the encryption responsibility on the user. Check out LastPass, for example. Without doing the encryption client-side, you could never trust the service. Similarly with backup services where you set your key locally, so that the backups can never be unlocked by someone on the server - they never have the key.
Partitioning is one of the primary methods for enterprises to secure services that have Internet-facing components. As I said, typically the secure server PULLs data from the less secure one, so the less secure server can never access anything more secure, even if it is fully compromised. Indeed, there will be a firewall that prevents any traffic from the DMZ (where the less secure service is located) from reaching the secure network. Only connections from the secure side are allowed through, and they will be tightly controlled by security processes. In a typical bank or other high-security setting, you may well find several layers like this, each with separate security controls, all partitioned from each other to enforce separation of data and security.
Hope that adds some clarity. Continue to ask if not!
UPDATE 2:
Even for simple, low-cost setups, I would still recommend partitioning. For a low-cost version, consider having two virtual servers, with the dedicated firewall replaced by careful control of the software firewall on the more secure server. Follow the same principles outlined above for everything else.

Creating a secure configuration file that contains passwords

I am developing an application that works with PostgreSQL and other database features that require a username and password to log in to the remote server that stores the data. The user has to be able to set the username and password, which would then be stored in a configuration file on disk. The problem is that anybody can open the configuration file and read the credentials, creating a serious security problem.
I have done some research on encrypting the whole configuration file and then decrypting it when needed, but the problem is that a hacker could put the program through a debugger and easily find out the decryption key. What is the best method to keep configuration data secret on Windows using C/C++?
The moment an attacker is able to attach a debugger to your running program is the moment the game is over. Being able to debug your program means that your user account or the underlying OS is compromised, which means every security measure on your app's behalf is futile. The attacker will (with knowledge, persistence and motivation) know everything you enter into your computer, or have entered and stored before.
"The user has to be able to set the username and password which would then be stored in a configuration file on disk"
This is the weak spot and this is what you need to change.
(On a side note, is the password you store never going to change? That's another security weak spot.)
As stated in Eugen Rieck's answer, if the attacker has physical access to your system he will, in time, break all your defenses.
The simple solution is clear: don't let him have access to the system that handles security/authorization. Have the SQL server on a dedicated, remote machine and let it handle the username/password validation.
Or, make your app multi-tiered, with a part on a remote machine that handles user authentication and routes your DB queries.
This will mean that your user will have to login every time they start your application.
(Preferably also after a pre-set period of inactivity.)
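As a sketch of what that looks like in practice: prompt for the credentials when the application starts and keep them only in memory, instead of persisting them in a configuration file. Shown here in Python with the `psycopg2` driver purely for illustration (the question's C/C++ app would do the same with libpq); the host and database names are made up:

```python
import getpass
import psycopg2  # illustrative driver choice

# Ask for credentials at startup; they live only in this process's memory
# and are never written to a configuration file on disk.
user = input("DB user: ")
password = getpass.getpass("DB password: ")
conn = psycopg2.connect(host="db.internal.example",  # hypothetical host
                        dbname="app", user=user, password=password)
```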
It all depends on how safe you need to be. It's important to understand that security is not easy to create and you should always try to use existing frameworks if possible.

Django: Securing / encrypting stored files

In a Django project, I want to keep user uploaded files secure on the server. Should this be done at the OS level (we are using ubuntu) or at the application level?
Encrypting at the application level will be easier to maintain. But, aside from some drawbacks like possible negative effect on performance, I am not even sure if this will have any point. If a hacker compromises the server, he will also have access to the encryption keys and how it is encrypted / decrypted.
Any suggestions are greatly appreciated. Thanks.
How you protect your data depends on what kinds of attacks you want to protect against. Of course, you probably don't know how an attacker is most likely to compromise your system, unless there are certain threat models you're particularly trying to protect against, like say a rogue sysadmin.
The attacker might gain access to the OS that the web server is running on. In this case, filesystem-level encryption probably does you no good. In fact, filesystem-level encryption is probably only useful as protection against somebody walking off with the physical server (which is a totally valid threat model). However, if the files are encrypted with keys stored in the database, then an attacker who has access to the web server OS but not the database is thwarted.
In contrast, an attacker might gain access to the database but not the OS, through a hole in your application. I would expect this to be less likely since modern operating systems present huge and well-studied attack surfaces.
Protecting your users' data against an attacker with full access to your servers is very difficult. You need to encrypt the data with a key that your servers don't have. This could be something like a password, or a key stored in a user cookie. The problem with all these schemes is that users can't be trusted to hold on to critical data like this -- they always want a way to reset their password if they forget it. In most cases, it's not realistic to protect data against an attacker with full access to both your OS and your database.
So I'd choose what you're trying to protect against. Personally, I'd expect an OS penetration to be most likely, and would thus encrypt the files with keys stashed in a part of the database that is extra-protected somehow. The challenge here is that the OS has to store database login credentials (in settings.py) for the web app to function, so keep those files as restricted as possible within the OS, e.g. chmod 600, owned by a user account that does as little else as possible.
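A minimal sketch of that arrangement, using Python's `cryptography` package: each file gets its own random key, the ciphertext goes to the filesystem, and the key goes to the extra-protected database table (the persistence step is only indicated in comments, since it depends on your schema):

```python
from cryptography.fernet import Fernet

def encrypt_upload(plaintext: bytes):
    """One random key per file: ciphertext to disk, key to the DB."""
    key = Fernet.generate_key()
    ciphertext = Fernet(key).encrypt(plaintext)
    # persist: key -> hardened DB table, ciphertext -> file storage
    return key, ciphertext

# An attacker who reaches the web server's filesystem but not the database
# sees only opaque Fernet tokens; the keys live elsewhere.
```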
You're right that if the key used to encrypt the files is stored on the server you don't get a whole lot of added security by encrypting the files.
However, if you use a key provided by the user, then you do get some security. For example, if you store the encryption key in a cookie, then it will only be available for the duration of each request. I don't believe this will create any new security issues (if an attacker can steal the cookie, they can also steal the user's session), and it will make it much harder for an attacker to access files belonging to users who aren't currently online.
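A sketch of that per-request flow, assuming a Django-style request object and a `file_key` cookie (both names are illustrative):

```python
from cryptography.fernet import Fernet, InvalidToken

def decrypt_for_request(request, ciphertext: bytes) -> bytes:
    """The key arrives in the user's cookie (serve it over HTTPS only) and
    exists on the server just for the duration of this request."""
    key = request.COOKIES["file_key"]  # Django-style cookie access
    try:
        return Fernet(key).decrypt(ciphertext)
    except InvalidToken:
        raise PermissionError("wrong or missing file key")
```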
If you're really paranoid, you could do what 1Password does, and send encrypted data back to the browser, which can decrypt it with JavaScript encryption routines…

sftp versus SOAP call for file transfer

I have to transfer some files to a third party. We can invent the file format, but want to keep it simple, like CSV. These won't be big files - a few tens of MB at most - and there won't be many: three files per night.
Our preference for the protocol is sftp. We've done this lots in the past and we understand it well.
Their preference is to do it via a web service/SOAP/https call.
The reason they give is reliability, mainly around knowing that they've fully received the file.
I don't buy this as a killer argument. You can easily build something into your file transfer process using sftp to make sure the transfer has completed, e.g. use headers/footers in the files, or move file between directories, etc.
The only other argument I can think of is that over http(s), ports 80/443 will be open, so there might be less firewall work for our infrastructure guys.
Can you think of any other arguments either way on this? Is there a consensus on what would be best practice here?
Thanks in advance.
File completeness is a common issue in "managed file transfer". If you went for a compromise "best practice", you'd end up running either AS2 (a web-service-ish way to transfer files that incorporates non-repudiation via signed integrity checks) or AS3 (the same thing over FTP or FTPS).
One of the problems with file integrity and SFTP is that you can't arbitrarily extend the protocol like you can FTP and FTPS. In other words, you can't add an XSHA1 command to your SFTP transfer just because you want to.
Yes, there are other workarounds (like transactional files that contain hashes of files received), but at the end of the day someone's going to have to do some work...but it really shouldn't be this hard.
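For what it's worth, the "someone does some work" version is small. A sketch using the `paramiko` SFTP client: upload under a temporary name, rename when complete (so the receiver never picks up a partial file), and drop a SHA-256 sidecar so they can verify integrity. The `.part`/`.sha256` naming is just a convention you'd agree with the other side:

```python
import hashlib
import paramiko  # assumed SFTP client library

def push_file(sftp: paramiko.SFTPClient, local: str, remote: str) -> None:
    """Upload under a temp name, then rename, then publish a checksum."""
    with open(local, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    sftp.put(local, remote + ".part")      # receiver ignores *.part files
    sftp.rename(remote + ".part", remote)  # rename = "transfer complete"
    with sftp.open(remote + ".sha256", "w") as f:
        f.write((digest + "\n").encode())  # sidecar for integrity checking
```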
If the third party you're talking to really doesn't have a non-web-service way to accept large files, you might be their guinea pig as they try to navigate a brand new world. (Or they may have just fired all their transmissions folks and are only now realizing that the world doesn't operate on SOAP... yet - seen that happen too.)
Either way, unless they GIVE you the magic code/utility/whatever to do the file-to-SOAP transaction for them (and that happens too), I'd stick to your sftp guns until they find the right guy on their end to talk bulk data transmissions.
SFTP is a protocol for secure file transfer; SOAP is an API protocol, which can be used for sending file attachments (i.e. MIME attachments) or Base64-encoded data.
SFTP adds additional potential complexity around separate processes for encrypting/decrypting files (at rest, if they contain sensitive data), file archiving, data latency, coordinating job scheduling, and setting up FTP service accounts.