How to secure my Django project

I am working on a project where I will have to configure some products remotely.
These products have a SIM card and can connect to the internet over GSM, so I won't be able to connect to them directly.
Customers will connect to my website to make configuration requests for their products by filling in forms. When they are done, I save each product's new configuration in the DB and send an SMS to each product. The SMS tells the product that it has to connect over HTTP to a URL specified in the SMS; when it connects to that URL, I identify the product by its serial number and send it its new configuration.
Now I don't really know how to secure the data sent in the SMS, or how to manage authentication from the product to my server.
I thought about an authentication scheme based on an MD5/SHA hash, but the problem is that the secret used for hashing will be stored in the product, and if the secret and the formula become known, it becomes critical.
Maybe using a dynamic hash based on the current date plus product information would be better. HTTPS could perhaps solve everything, but I can't rely on the product's serial number alone as authorization.
I also thought about adding a public-key infrastructure, but I think it's too complex.
I would encrypt data with the product's public key, for example, and the product would encrypt data with my server's public key, but I don't know if that architecture is too hard to implement.
I use the Django framework and nginx.

I think you're confusing what a 'hash' is with encrypting/decrypting data. A hash is typically one-way, with no means to (easily) 'unhash' the information that has been transferred.
The weakest link in your design is the very limited SMS (text messaging) system for transferring the data. It lacks the ability to use a large/full character set (such as binary) - which means encrypted messages would have to be quite long to be secure. But it also limits the length of messages, so these long encrypted messages could not be sent.
As mentioned in the comments, this question is too general to provide a specific answer to. But if these were 'smart' devices, a dedicated configuration 'app' - using SSL - would likely be your best solution.
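If the devices end up talking to the Django endpoint over HTTPS, one way to turn the question's "hash plus per-device secret" idea into something more robust is an HMAC over the serial number and a timestamp, rather than a bare hash of the secret. Below is a minimal sketch, assuming a hypothetical per-device secret provisioned at manufacture and stored server-side keyed by serial number; the names and the freshness window are illustrative, not an existing API.

```python
# Sketch of HMAC-based device authentication for the configuration endpoint.
# Assumes each product holds a per-device secret provisioned at manufacture and
# that the server keeps the same secret keyed by serial number (assumption).
import hashlib
import hmac
import time

MAX_SKEW_SECONDS = 300  # reject requests older than five minutes

def sign_request(serial_number: str, timestamp: int, device_secret: bytes) -> str:
    """Device side: HMAC over the serial number and timestamp."""
    message = f"{serial_number}:{timestamp}".encode()
    return hmac.new(device_secret, message, hashlib.sha256).hexdigest()

def verify_request(serial_number: str, timestamp: int, signature: str,
                   device_secret: bytes) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    if abs(time.time() - timestamp) > MAX_SKEW_SECONDS:
        return False  # stale or replayed request
    expected = sign_request(serial_number, timestamp, device_secret)
    return hmac.compare_digest(expected, signature)
```

With something along these lines, the SMS only needs to carry the URL (and perhaps a one-time nonce); the secret itself never travels over SMS, and TLS protects the configuration payload in transit.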

Related

Security techniques to identify the client, when there is no login function

I'm working on my own "Auto update service", to support automatic updates for every desktop application I create.
Below is my basic idea.
Client
A program that can be run as an independent process and that is included in every product I make.
When my product runs, this updater runs first and queries the server for a newer version of my product.
If there is a new version, it downloads the binary file from the server and replaces my product's binary with it.
All of the above processes should not require any user input other than choosing whether or not to proceed with the update.
Server
Product IDs are stored in a database.
For each product, binary files and release information of each version are stored.
Supports querying products and versions via REST-style HTTP requests, and sends back the binary file.
On the server, I need to do something to check that the requesting client is a valid one. So I came up with the idea of issuing a secret key with each distribution of my product, just like a game CD key, and checking it in the header of the HTTP request. This is the best I've come up with, but I'm still concerned:
On the server side, is it safe to store secret keys in the database? If not, how should the server store and remember them?
On the client side, is it safe to store secret keys in the client? What if an attacker tries to decompile the client program?
Any other better ideas?
I am new to developing web services, so I don't have much knowledge. Please understand.
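On the first concern, one common pattern is to treat each distributed key like a password: the server stores only a hash of it and compares hashes on every request, so a leaked database does not directly expose usable keys. A minimal sketch under that assumption, with the storage reduced to an in-memory set for illustration:

```python
# Sketch: keep only a hash of each distributed secret key on the server.
# The in-memory set stands in for a real database table (assumption).
import hashlib
import hmac
import secrets

def new_distribution_key() -> tuple[str, str]:
    """Generate a key to ship with one distribution; persist only its hash."""
    raw_key = secrets.token_urlsafe(32)                       # embedded in the client
    key_hash = hashlib.sha256(raw_key.encode()).hexdigest()   # stored in the DB
    return raw_key, key_hash

def is_valid_key(presented_key: str, stored_hashes: set[str]) -> bool:
    """Check a key presented in an HTTP header against the stored hashes."""
    presented_hash = hashlib.sha256(presented_key.encode()).hexdigest()
    return any(hmac.compare_digest(presented_hash, h) for h in stored_hashes)
```

This only addresses the server side; a key embedded in the shipped client can still be recovered by decompiling it, which is the second concern and is much harder to defend against.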

Developing custodial vs non-custodial wallets

This is a very broad and general question so I'm going to specify my intended use case and branch several questions, mainly referring to the implementation of each approach.
In short, users using my wallet are going to constantly send to each other, and perhaps receive/send from/to other wallets and networks, and I'm mentioning this in case it could provide an overview of how transactions will take place in my app.
So, to start with custodial wallets: from what I know, most custodial wallets have one cold wallet and one hot wallet, plus a hot wallet for every user, so when a user creates an account, keys are automatically generated for that user. My question is: how are the users' keys stored? Is it in a normal DB, or do they do it another way? And how does this model work, i.e. how do they technically use the cold and hot wallets?
Moving on to non-custodial wallets, I want to know basically the same thing: how and where are the users' keys stored, and how are they accessed? And if I go with this approach, am I still able to impose transaction fees on transactions happening in my app?
I hope I made sense in what I said, and I hope to see your answers. If you feel like you know an answer to some part of the question but not all of it, that's okay; say what you know, as any contribution would be great. And if anyone is up for a Discord/Zoom call I would really appreciate it. Thanks in advance, and please let me know if you need extra info to answer.
custodial wallets
how are the users' keys stored
It depends on each app's implementation. A good practice is to store sensitive data (such as private keys) in a secrets management system. It's usually an encrypted data store with advanced access control: allowing access to groups of data based on user group policies, generating single-use or time-limited tokens for accessing the data, and so on. The application can then request the private key from the secrets management system using the user's unique token.
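As one concrete possibility (the Vault address, secret path layout, and field name below are assumptions made for the example), the application can fetch the key transiently from HashiCorp Vault's KV store via the hvac client instead of keeping it in its own database:

```python
# Sketch: fetch a user's private key from a secrets manager (HashiCorp Vault's
# KV v2 store via the hvac client) rather than the application's own database.
# The Vault URL, path layout, and field name are assumptions for the example.
import hvac

def load_private_key(user_id: str, user_token: str) -> str:
    client = hvac.Client(url="https://vault.internal.example:8200", token=user_token)
    if not client.is_authenticated():
        raise PermissionError("token rejected by the secrets manager")
    response = client.secrets.kv.v2.read_secret_version(path=f"wallets/{user_id}")
    # The key is used transiently for signing and never persisted by the app.
    return response["data"]["data"]["private_key"]
```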
non-custodial wallets
how and where are the users' keys stored? and how are they accessed?
Software wallets (including browser extensions) usually store private keys in a file located on your computer, encrypted with a master key. The master key can be, for example, the hash of your MetaMask password and some predefined salt. When you unlock the wallet by entering the correct password, the wallet software decrypts the file containing the actual private keys and is then able to use them.
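A stripped-down sketch of that "password + salt -> master key -> decrypt the key file" pattern, using the cryptography package; this is not the actual vault format of MetaMask or any particular wallet, and the KDF parameters and file layout are assumptions made for illustration:

```python
# Illustration of deriving a master key from a password and using it to
# encrypt/decrypt a key file. Not any specific wallet's real format.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_master_key(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def encrypt_keyfile(private_key: bytes, password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    token = Fernet(derive_master_key(password, salt)).encrypt(private_key)
    return salt, token    # both can sit on disk; neither reveals the key alone

def decrypt_keyfile(salt: bytes, token: bytes, password: str) -> bytes:
    return Fernet(derive_master_key(password, salt)).decrypt(token)
```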
Hardware wallets store private keys on the device, encrypted with a master key as well (e.g. derived from your device PIN and a salt). It is common practice that private keys never leave the device, so the UI software usually sends a request to sign raw transaction data to the device; the device then asks the user to enter their PIN, performs the signature on the device itself, and returns the signed transaction back to the UI software.

How to (programmatically or by other means) encrypt or protect customer data

I am working on a web project and I want to (as far as possible) handle user data in a way that reduces the damage to users' privacy in case someone compromises our servers/databases.
Of course we only have user data that is needed for the website to do its job, but because of the nature of the project we have quite a bit of information on our users (part of the functionality is to apply for jobs and send your CV along with the application).
We thought about encrypting/decrypting sensitive data with a private/public key pair of which the private key is encrypted with the user's password, but found some security and implementation problems with that :P
The question is: how do you implement user privacy and protection against data theft on a centralised web server, using browser-compatible protocols, while the functionality requires that users can exchange sensitive data?
To give some additional insight: this project is not yet in production stage so there is still time to make things right.
we are already doing some basic stuff like
serving https
enforcing https for sites that may handle sensitive data
hashing salted passwords
some hardening of our server and services on it
encrypted hard drives to prevent someone from reading all client information after stealing our servers / hard drives
But that's about it; besides the password hashes there is no mechanism that would stop, or at least make it harder for, someone who managed to get into (part of) the server to obtain all the data on all our users. Nor do we see a way to encrypt user data so that we ourselves cannot read it, since we need the data (we wouldn't have collected it otherwise) for some part of the website / the functionality we want it to provide.
Even if we somehow managed (maybe with some JavaScript) to have all data reach us already encrypted by the client's browser, and we served the client his private key encrypted with some passphrase (for example his login password), we could not, for example, scan user-uploaded files for viruses and the like. On the other hand, client-side encryption, at least with the browser/webserver concept as we imagine it, leaves some security issues (you are welcome to prove me wrong) and seems quite like reinventing the wheel; and since this project is not primarily about privacy, but rather privacy is a preferable property, we might not want to reinvent the wheel for it.
I strongly believe I am not the first web developer thinking about this, am I? So what have other projects done? What have you done to try to protect your users' data?
If relevant: we are using Django and PostgreSQL for most things, and JavaScript for some UI.
The common way to deal with this issue is to split (partition) your data.
Keep minimal data on the Internet-facing web server and pass any sensitive data as quickly as possible to another server that is kept behind a second firewall. Often, data is pulled from the web server by the internal secure server to further increase security. This is how banks and finance houses handle sensitive data from the internet (or at least they should). There is even a set of standards (PCI DSS) that covers the secure handling of credit card transactions and explains all of this in mind-numbing detail.
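As a toy illustration of the pull direction (the hostname, endpoint paths, and token handling below are invented for the example): the internal server initiates the connection, fetches whatever the web tier has queued, and removes it there, so the DMZ host never needs credentials for, or a route into, the internal network.

```python
# Toy sketch of the "internal server pulls from the DMZ web server" pattern.
# Hostname, endpoints, and auth are invented; in practice the firewall only
# allows connections in this one direction (from the secure side outwards).
import requests

DMZ_QUEUE_URL = "https://web-dmz.example.internal/api/outbox"

def pull_pending_submissions(api_token: str) -> list[dict]:
    headers = {"Authorization": f"Bearer {api_token}"}
    response = requests.get(DMZ_QUEUE_URL, headers=headers, timeout=10)
    response.raise_for_status()
    submissions = response.json()
    for item in submissions:
        store_in_secure_database(item)  # database lives only on the internal net
        requests.delete(f"{DMZ_QUEUE_URL}/{item['id']}", headers=headers, timeout=10)
    return submissions

def store_in_secure_database(item: dict) -> None:
    ...  # insert into the internal, non-Internet-facing database
```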
To further secure the internal server, you can put it on a separate network and secure physical access to it. You can also focus other security tools on it, such as Data Loss Prevention and Intrusion Prevention.
In addition, if you have any data that you don't need to see in the clear, use a client-side encryption library to encrypt it locally. There are still risks of course, since the user's workstation might be compromised by malware, but it still removes risks during data transmission and server-side storage. It also puts responsibility onto the user rather than just onto your central servers.
You already seem to be a long way ahead of most web developers in ensuring that your customers are kept safe and secure. One other small change it would be worth considering would be to turn on enforced HTTPS for all transactions with your site. That way, there is very little chance of unexpected data leakage such as data being unexpectedly cached.
UPDATE:
Client side encryption can help a lot since it puts the encryption responsibility on the user. Check out LastPass for example. Without doing the encryption client-side, you could never trust the service. Similarly with backup services where you set your key locally so that the backups can never be unlocked by someone on the server - they never have the key.
Partitioning is one of the primary methods for enterprises to secure services that have Internet-facing components. As I said, typically the secure server PULLs data from the less secure one, so the less secure server can never have any access to anything more secure even if fully compromised. Indeed, there will be a firewall that prevents any traffic from the DMZ (where the less secure service is located) getting to the secure network. Only connections from the secure side are allowed through, and they will be tightly controlled by security processes. In a typical bank or other high-security setting, you may well find several layers like this, each with separate security controls, all partitioned from each other, enforcing separation of data and security.
Hope that adds some clarity. Continue to ask if not!
UPDATE 2:
Even for simple, low-cost setups, I would still recommend partitioning. For a low-cost version, consider having two virtual servers, with the dedicated firewall replaced by careful control of the software firewall on the more secure server. Follow the same principles outlined above for everything else.

Access control in Datomic

When writing an application based on Datomic and Clojure, it seems the peers have unrestricted access to the data. How do I build a multi-user system where user A cannot access data that is private to user B?
I know I can write the queries in Clojure such that only user A's private data is returned... but what prevents a malicious user from hacking the binaries to see user B's private data?
UPDATE
It seems the state of Clojure/Datomic applications is in fact lacking in security, based on the answer from @Thumbnail and the link to John P Hackworth's blog.
Let me state more clearly the problem I see, because I don't see any solution to this and it is the original problem that prompted this question.
Datomic has a data store, a transactor, and peers. The peers are on the user's computer and run the queries against data from the data store. My question is: how do I restrict access to data in the data store? Since the data store is dumb and in fact just stores the data, I'm not sure how to provide access controls.
When AWS S3 is used as a data store, the client (a peer) has to authenticate before accessing S3, but once it is authenticated, doesn't the peer have access to all the data? It is limited only by the queries that it runs; if the user wants to get another user's data, they can change the code (the binary) in the client so that the queries run with a different user name, right? To be clear... isn't the access control just a condition on the query? Or are there user-specific connections that the data store recognizes, with the data store limiting what data is visible?
What am I missing?
In a traditional web framework like Rails, the server-side code restricts all access to data and authenticates and authorizes the user. The user can change the URLs or client-side code, but the server will not allow access to data unless the user has provided the correct credentials.
Since the data store in Datomic is dumb, it seems it lacks the ability to restrict access on a per-user basis, and the application (peer) must do this. I do not want to trust the user to behave and not try to acquire other users' information.
A simple example is a banking system. Of course the user will be authenticated... but after that, what prevents them from modifying the client-side code/binary to change the data queries and get other users' account information from the data store?
UPDATE - MODELS
Here are two possible models that I have of how Datomic and Clojure work... the first one is my current model (in my head).
The user's computer runs the client/peer, which contains the queries and has complete access to the data store; the user was authenticated before the client started, thereby restricting use to users we have credentials for.
The user's computer runs an interface (web app) that interacts with a peer residing on a server. The queries live on the server and cannot be modified by the user, so the access controls are themselves protected by the security of the server running the peer.
If the second model is the correct one, then my question is answered: the user cannot modify the server code and the server code contains the access controls... therefore, the "peers" which I thought resided on the user's computer actually reside on the application server.
Your second model is the correct one. Datomic is designed so that peers, transactor and storage all run within a trusted network boundary in software and on hardware you control. Your application servers run the peer library, and users interact with your application servers via some protocol like HTTP. Within your application, you should provide some level of user authentication and authorization. This is consistent with the security model of most applications written in frameworks like Rails (i.e. the end user doesn't require database permissions, but rather application permissions).
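The same model in a more familiar server-side setting (a Django-flavoured sketch, since the pattern is the same one Rails uses; the model and field names are invented for the example): the query runs on the application server and is always scoped to the authenticated user, so nothing the end user can modify changes which rows come back.

```python
# Django-flavoured sketch of application-level authorization: the query runs
# server-side and is always scoped to the authenticated user.
# `Account` and its `owner`/`balance` fields are invented for the example.
from django.contrib.auth.decorators import login_required
from django.http import JsonResponse

from .models import Account  # hypothetical model with an `owner` foreign key

@login_required
def my_accounts(request):
    # The filter is applied on the server; the client never chooses whose
    # accounts are queried, it only receives its own.
    accounts = Account.objects.filter(owner=request.user).values("id", "balance")
    return JsonResponse({"accounts": list(accounts)})
```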
Datomic provides a number of very powerful abstractions to help you write your application-level auth(n|z) code. In particular, since transactions are first-class entities, Datomic provides the ability to annotate your transactions at write-time (http://docs.datomic.com/transactions.html) with arbitrary facts (e.g. the username of the person responsible for a given transaction, a set of groups, etc.). At read-time, you can filter a database value (http://docs.datomic.com/clojure/index.html#datomic.api/filter) so that only facts matching a given predicate will be returned from queries and other read operations on that database value. This allows you to keep authz concerns out of your query logic, and to layer your security in consistently.
As I understand it ... and that's far from completely ... please correct me if I'm wrong ...
The distinguishing feature of Datomic is that the query engine, or large parts of it, reside in the database client, not in the database server. Thus, as you surmise, any 'user' obtaining programmatic access to a database client can do what they like with any of the contents of the database.
On the other hand, the account system in the likes of Oracle constrains client access, so that a malicious user can only, so to speak, destroy their own data.
However, ...
Your application (the database client) need not (and b****y well better not!) provide open access to any client user. You need to authenticate and authorize your users. You can show the client user the code, but provided your application is secure, no malicious use can be made of this knowledge.
A further consideration is that Datomic can sit in front of a SQL database, to which constrained access can be constructed.
A web search turned up Chas. Emerick's Friend library for user authentication and authorization in Clojure. It also found John P Hackworth's level-headed assessment that Clojure web security is worse than you think.
You can use transaction functions to enforce access restrictions for your peers/users. You can put data that describes your policies into the db and use the transaction function(s) to enforce them. This moves the mechanism and policy into the transactor. Transactions that do not meet the criteria can either fail or simply result in no data being transacted.
Obviously you'll need some way to protect the policy data and transaction functions themselves.

How can I uniquely identify a desktop application making a request to my API?

I'm fleshing out an idea for a web service that will only allow requests from desktop applications (and desktop applications only) that have been registered with it. I can't really use a "secret key" for authentication because it would be really easy to discover and the applications that use the API would be deployed to many different machines that aren't controlled by the account holder.
How can I uniquely identify an application in a cross-platform way that doesn't make it incredibly easy for anyone to impersonate it?
You can't. As long as you put information in an uncontrolled place, you have to assume that information will be disseminated. Encryption doesn't really apply, because the only encryption-based approaches involve keeping a key on the client side.
The only real solution is to put the value of the service in the service itself, and make the desktop client be a low-value way to access that service. MMORPGs do this: you can download the games for free, but you need to sign up to play. The value is in the service, and the ability to connect to the service is controlled by the service (it authenticates players when they first connect).
Or, you just make it too much of a pain to break the security: for example, by putting a credential check at the start and end of every single method, and, because eventually someone will create a binary that patches out all of those checks, by loading pieces of the application from the server, with credential and timestamp checks in place and a different memory layout for each download.
Your comment proposes a much simpler scenario. Companies have a much stronger incentive to protect access to the service, and there will be legal agreements in effect regarding their liability if they fail to protect access.
The simplest approach is what Amazon does: provide a secret key, and require all clients to sign their requests with that secret key. Yes, rogue employees within those companies can walk away with the secret. So you give the company the option (or maybe require them) to change the key on a regular basis, perhaps daily.
You can enhance that with an IP check on all accesses: each customer will provide you with a set of valid IP addresses. If someone walks out with the desktop software, they still can't use it.
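A check like that is cheap to add in front of the credential check; here is a minimal sketch, with the per-customer allowlist reduced to an in-memory dict for the example (behind a reverse proxy you would take the address from a trusted header instead):

```python
# Minimal sketch of a per-customer IP allowlist check.
# The allowlist source is a plain dict standing in for real configuration.
import ipaddress

CUSTOMER_ALLOWED_NETWORKS = {
    "acme-corp": ["203.0.113.0/24", "198.51.100.17/32"],  # documentation ranges
}

def ip_allowed(customer_id: str, remote_addr: str) -> bool:
    address = ipaddress.ip_address(remote_addr)
    networks = CUSTOMER_ALLOWED_NETWORKS.get(customer_id, [])
    return any(address in ipaddress.ip_network(net) for net in networks)
```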
Or, you can require that your service be proxied by the company. This is particularly useful if the service is only accessed from inside the corporate firewall.
Encrypt it (the secret key), hard-code it, and then obfuscate the program. Use HTTPS for the web-service, so that it is not caught by network sniffers.
Generate the key using hardware-specific IDs: processor ID, MAC address, etc. Think of a deterministic GUID.
You can then encrypt it and send it over the wire.
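For what it's worth, roughly what that can look like with Python's standard library; note that uuid.getnode() may fall back to a random value when no MAC address is readable, and hardware IDs can be spoofed or change after a hardware swap, so treat the result as a weak identifier (the namespace string is an assumption for the example):

```python
# Sketch of deriving a deterministic, GUID-like identifier from hardware-ish
# properties. These values can be spoofed or change, so it is a weak identifier.
import platform
import uuid

APP_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_DNS, "example-app.invalid")  # assumed name

def machine_guid() -> uuid.UUID:
    fingerprint = "|".join([
        f"{uuid.getnode():012x}",   # MAC address (may be randomized or absent)
        platform.machine(),
        platform.system(),
        platform.processor(),       # often empty on some platforms
    ])
    return uuid.uuid5(APP_NAMESPACE, fingerprint)
```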