This is a very broad and general question, so I'm going to specify my intended use case and split it into several sub-questions, mainly about how each approach is implemented.
In short, users of my wallet will constantly send funds to each other, and may also send to and receive from other wallets and networks. I mention this in case it helps give an overview of how transactions will take place in my app.
To start with custodial wallets: from what I know, most custodial wallets have one cold wallet, one main hot wallet, and a hot wallet for every user, so when a user creates an account, keys are automatically generated for that user. My question is how the users' keys are stored: in a normal database, or some other way? And how does this model technically work, i.e. how are the cold and hot wallets actually used?
Moving to non-custodial wallets, I want to know basically the same thing: how and where are the users' keys stored, and how are they accessed? And if I go with this approach, can I still impose fees on transactions happening in my app?
I hope this makes sense. If you know an answer to some part of the question but not all of it, that's okay; any contribution would be great, and if anyone is up for a Discord/Zoom call I would really appreciate it. Thanks in advance, and please let me know if you need extra info to answer.
custodial wallets
how are the users' keys stored
It depends on the implementation of each app. A good practice is to store sensitive data (such as private keys) in a secrets management system. It's usually an encrypted database with advanced access control: allowing access to groups of data based on user group policies, generating single-use or time-limited tokens for accessing the data, and so on. The application can then request the private key from the secrets manager using the user's unique token.
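As a toy sketch of that token flow (not any particular vendor's API; in a real secrets manager the keys are also encrypted at rest), a store that releases a key only against a single-use, time-limited token might look like this:

```python
import secrets
import time

class SecretsStore:
    """Toy stand-in for a secrets management system (e.g. Vault or AWS
    Secrets Manager). Only the token-based access control is modeled;
    a real SMS also encrypts the secrets at rest and audits access."""

    def __init__(self):
        self._secrets = {}   # user_id -> private key material
        self._tokens = {}    # token -> (user_id, expiry timestamp)

    def put_secret(self, user_id, key_material):
        self._secrets[user_id] = key_material

    def issue_token(self, user_id, ttl_seconds=60):
        # Single-use, time-limited token for one user's secret.
        token = secrets.token_hex(16)
        self._tokens[token] = (user_id, time.time() + ttl_seconds)
        return token

    def get_secret(self, token):
        user_id, expiry = self._tokens.pop(token, (None, 0))  # single use
        if user_id is None or time.time() > expiry:
            raise PermissionError("invalid or expired token")
        return self._secrets[user_id]

store = SecretsStore()
store.put_secret("alice", "example-private-key")
tok = store.issue_token("alice")
key = store.get_secret(tok)   # key is released exactly once per token
```

The application server, not the user, talks to the store; the user's session only ever maps to a token.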
non-custodial wallets
how and where are the users' keys stored? and how are they accessed?
Software wallets (including browser extensions) usually store private keys in a file on your computer, encrypted by a master key. The master key can be, for example, derived from the hash of your MetaMask password and a predefined salt. When you unlock the wallet by entering the correct password, the wallet software decrypts the file containing the actual private keys, and is then able to use them.
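The password-to-master-key step can be sketched with PBKDF2 (the exact scheme, salt, and iteration count vary per wallet; the values below are illustrative, not MetaMask's actual parameters):

```python
import hashlib

def derive_master_key(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256: a slow, salted derivation, so brute-forcing
    # the password against a stolen key file is expensive. Real wallets
    # may use scrypt or similar; the iteration count here is illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = b"wallet-specific-salt"   # stored alongside the encrypted key file
key = derive_master_key("correct horse battery staple", salt)

# Same password + same salt -> same master key; wrong password -> different key,
# so the key file simply fails to decrypt.
assert key == derive_master_key("correct horse battery staple", salt)
assert key != derive_master_key("wrong password", salt)
```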
Hardware wallets store private keys on the device, encrypted by a master key as well (e.g. derived from your device PIN and a salt). It's common practice that private keys never leave the device. So the UI software sends a request to sign raw transaction data to the device, the device asks the user to enter their PIN, performs the signature on the device itself, and returns the signed transaction back to the UI software.
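A minimal sketch of that request/response flow, with HMAC standing in for a real ECDSA signature and a plain object modeling the device's secure element (illustration only, not any vendor's protocol):

```python
import hashlib
import hmac

class HardwareWallet:
    """Toy model of a signing device: the private key is created inside
    the device and there is deliberately no method that returns it."""

    def __init__(self, pin: str):
        self._pin = pin
        self._private_key = b"device-internal-secret"  # never leaves

    def sign(self, raw_tx: bytes, pin_entered: str) -> bytes:
        # The device verifies the PIN, signs on-device, and returns
        # only the signature; HMAC stands in for ECDSA here.
        if pin_entered != self._pin:
            raise PermissionError("wrong PIN")
        return hmac.new(self._private_key, raw_tx, hashlib.sha256).digest()

# Host software sends the raw transaction; only the signature comes back.
device = HardwareWallet(pin="1234")
signature = device.sign(b"raw transaction bytes", pin_entered="1234")
```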
Taking the following service description:
X is a platform matching buyers and sellers.
Buyers can join the platform by creating a buyer account and browse seller shops, buy, manage their account, ..., on the Buyers client application.
Sellers can join the platform by creating a seller account and manage their shops and orders, ..., on the Sellers client application.
I am still confused about the right approach to adopt.
Here I represented the organization X (the platform). I assume that a buyer is not considered an organization but rather a user of X. So every time a buyer creates an account, I register a user under X, save the email and password in an external database, and link this entry to a user in X's wallet.
A seller can be considered an organization (at least to me, but happy to debate that). So every time a seller creates an account, I have to add a new organization to the existing network. The sellers will, however, share the same "Seller application", also using an email/password approach.
In most of the samples under the Hyperledger Fabric repo, there are 3-4 organizations at the start of the network, and it is quite painful to add one more to an existing network. In my case, I could end up with a million organizations, or an unbounded number if the service is a success. Can this scale?
Is it the correct approach for this kind of use case? Any feedback or resource related to this use case is welcome.
This doesn't look like a valid use of Hyperledger Fabric. The blockchain is optimized to store transactional information. It isn't a regular DB; if you try, for instance, to store "user profiles", you will have a hard time. Each member of the blockchain network (again, Hyperledger Fabric) is meant to keep a copy of the ledger, so everyone would get access to all user profiles. You can play around with PDC (private data collections), or, as you mention, have virtually unlimited users created under a single organization, but that isn't really how it's supposed to be used.
So, again, Hyperledger Fabric is meant to store transactional information (a ledger records transactions). Whatever strategy you implement for your use case, you should keep buyer/seller profiles and information off chain, and use the ledger only for transactional information that members of the network can see. In this scenario Fabric would serve as an audit-trail system, adding trust to each operation between buyers and sellers.
I'm new to cloud computing infrastructure and trying to grasp the best practices, and today this question came to mind: how do I store sensitive data in AWS, what services do I need to use, and what architecture should I build for it? I wrote a scenario below to further explain my question.
Let's say I have user registration, and I need every user to input a secret key that I use to access some third-party service on their behalf (let's assume that's the only way to access that service). How do I store it in my database, say RDS, so that other IAM users who can access the database see only an encrypted secret key, never the plain text?
I searched online and found some people saying KMS, some saying Secrets Manager, some saying backend encryption, and some saying frontend encryption. Which way should I go?
To whoever decides to answer this question: thanks in advance, but please elaborate as much as you can, because I'm still learning the concepts and trying to leverage the "cloud" capabilities as much as possible.
Two common approaches would be to encrypt the secret key at either a) the application level, or b) the database level. To encrypt the key inside your application, use a reliable encryption method such as AES-256 (note that SHA-256 and SHA-512 are one-way hashes, not encryption; a hashed key could never be recovered for later use). The key would be encrypted, and inaccessible, even before you write it out to your database as binary content. To encrypt at the database level, there are a number of options, depending on your particular database. If your RDBMS supports encrypted columns, then from your application you may simply write the secret key to its column. The database then automatically handles encrypting it on the way in, and decrypting it on the way out when you read it.
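The application-level option is often done as "envelope encryption": each secret gets a fresh data key, and the data key itself is encrypted under a master key that lives in KMS, never in the database. The cipher below is a hand-rolled XOR keystream used only to keep the sketch dependency-free; in production you would use a vetted AEAD such as AES-256-GCM via KMS or a crypto library, never a homemade cipher:

```python
import hashlib
import os

def _keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # Illustrative stream cipher built from SHA-256 counter blocks.
    # Stand-in for a real AEAD; do not use in production.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + nonce + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def encrypt_secret(master_key: bytes, plaintext: bytes):
    # Envelope pattern: fresh data key per secret, wrapped by the master key.
    data_key = os.urandom(32)
    nonce = os.urandom(16)
    ciphertext = _keystream_xor(data_key, nonce, plaintext)
    wrapped_key = _keystream_xor(master_key, nonce, data_key)
    return nonce, wrapped_key, ciphertext   # all three are safe to store in RDS

def decrypt_secret(master_key: bytes, nonce, wrapped_key, ciphertext):
    data_key = _keystream_xor(master_key, nonce, wrapped_key)
    return _keystream_xor(data_key, nonce, ciphertext)

master_key = os.urandom(32)   # in practice: held by KMS, used via API calls
record = encrypt_secret(master_key, b"third-party API secret")
assert decrypt_secret(master_key, *record) == b"third-party API secret"
```

Someone who can read the RDS rows sees only ciphertext; decryption additionally requires KMS permission to use the master key, which you grant to the application role alone.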
I just want to know: is there any way to make specific information on the blockchain queryable by only one specific account?
More exactly, I am thinking of letting users put their information on chain and grant access to a specific account, so that only that account can query the information from the chain.
I checked ZK-SNARKs; it seems that technique only verifies that information is correct without revealing any detail of it, so it doesn't seem usable in this case.
Any raw data placed on a blockchain is available to everyone on the network. This is one of the fundamental requirements to ensure that multiple distributed and decentralized nodes are able to verify a shared state.
However, the data placed on chain does not need to be "transparent" to every user. You could, for example, encrypt some data and place it on the blockchain. Of course everyone will be able to see your encrypted data, but only with the decryption key would they be able to make sense of it.
Assuming the blockchain you use has built-in public key cryptography for account authentication, you could use the private key as your encryption/decryption key. Thus only "that account" would have access to that data (or rather, anyone who knows the private key that corresponds to that account).
However, all of this logic would need to exist "off-chain". If you submit a transaction with raw data, and expect the blockchain to do the encryption/decryption for you, anyone who runs a node would be able to see that transaction and your raw data. Thus it must be encrypted before it ever reaches the blockchain.
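A minimal sketch of that off-chain encryption flow, with a Python list standing in for the world-readable chain and a toy one-block XOR cipher (insecure, illustration only; a real implementation would use a proper cipher with nonces) standing in for real encryption:

```python
import hashlib

CHAIN = []  # stand-in for on-chain storage: append-only and world-readable

def account_key(private_key: bytes) -> bytes:
    # Derive a 32-byte symmetric key from the account's private key.
    return hashlib.sha256(b"storage-key" + private_key).digest()

def put_on_chain(private_key: bytes, message: bytes) -> int:
    # Encryption happens OFF-chain; only ciphertext is ever submitted.
    key = account_key(private_key)
    assert len(message) <= 32, "toy cipher handles a single block only"
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    CHAIN.append(ciphertext)   # every node can see this, but not read it
    return len(CHAIN) - 1

def read_from_chain(private_key: bytes, index: int) -> bytes:
    key = account_key(private_key)
    return bytes(c ^ k for c, k in zip(CHAIN[index], key))

idx = put_on_chain(b"alice-private-key", b"my private note")
assert read_from_chain(b"alice-private-key", idx) == b"my private note"
# A different key decrypts to garbage, even though the ciphertext is public:
assert read_from_chain(b"mallory-private-key", idx) != b"my private note"
```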
When writing an application based on Datomic and Clojure, it seems the peers have unrestricted access to the data. How do I build a multi-user system where user A cannot access data that is private to user B?
I know I can write the queries in Clojure such that only user A's private data is returned... but what prevents a malicious user from hacking the binaries to see user B's private data?
UPDATE
It seems the state of Clojure/Datomic applications is in fact lacking in security, based on the answer from @Thumbnail and the link to John P Hackworth's blog.
Let me state the problem more clearly, because I don't see any solution to it, and it is the original problem that prompted this question.
Datomic has a data store, a transactor, and peers. The peers are on the user's computer and run the queries against data from the data store. My question is: how do I restrict access to data in the data store? Since the data store is dumb and in fact just stores the data, I'm not sure how to provide access controls. When AWS S3 is used as a data store, the client (a peer) has to authenticate before accessing S3, but once it is authenticated, doesn't the peer have access to all the data, limited only by the queries that it runs? If the user wants to get another user's data, they can change the code (binary) in the client so that the queries run with a different user name, right? To be clear: isn't the access control just a condition on the query? Or are there user-specific connections that the data store recognizes, so that the data store limits what data is visible?
What am I missing?
In a traditional web framework like Rails, the server-side code restricts all access to data and authenticates and authorizes the user. The user can change the URLs or client-side code, but the server will not allow access to data unless the user has provided the correct credentials.
Since the data store in Datomic is dumb, it seems it lacks the ability to restrict access on a per-user basis, and the application (peer) must do this. I do not want to trust the user to behave and not try to acquire other users' information.
A simple example is a banking system. Of course the user will be authenticated... but after that, what prevents them from modifying the client-side code/binary to change the data queries and get other users' account information from the data store?
UPDATE - MODELS
Here are two possible models I have of how Datomic and Clojure work; the first one is my current mental model.
1. The user's computer runs the client/peer, which has the queries and complete access to the data store; the user was authenticated before the client started, thereby restricting use to users we have credentials for.
2. The user's computer has an interface (webapp) that interacts with a peer residing on a server. The queries are on the server and cannot be modified by the user, so the access controls are themselves protected by the security of the server running the peer.
If the second model is the correct one, then my question is answered: the user cannot modify the server code, and the server code contains the access controls. Therefore the "peers", which I thought resided on the user's computer, actually reside on the application server.
Your second model is the correct one. Datomic is designed so that peers, transactor and storage all run within a trusted network boundary in software and on hardware you control. Your application servers run the peer library, and users interact with your application servers via some protocol like HTTP. Within your application, you should provide some level of user authentication and authorization. This is consistent with the security model of most applications written in frameworks like Rails (i.e. the end user doesn't require database permissions, but rather application permissions).
Datomic provides a number of very powerful abstractions to help you write your application-level auth(n|z) code. In particular, since transactions are first-class entities, Datomic provides the ability to annotate your transactions at write-time (http://docs.datomic.com/transactions.html) with arbitrary facts (e.g. the username of the person responsible for a given transaction, a set of groups, etc.). At read-time, you can filter a database value (http://docs.datomic.com/clojure/index.html#datomic.api/filter) so that only facts matching a given predicate will be returned from queries and other read operations on that database value. This allows you to keep authz concerns out of your query logic, and to layer your security in consistently.
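The annotate-at-write / filter-at-read pattern can be sketched generically (plain Python standing in for the idea; this is not the Datomic API, where you would attach facts to the transaction entity and call `d/filter` on a database value):

```python
# Each fact is recorded together with the identity attached to its
# transaction; reads go through a filtered view that admits only facts
# whose annotation passes a predicate. Authorization logic thus stays
# out of the queries themselves.

facts = []  # (entity, attribute, value, tx_owner)

def transact(owner, entity, attribute, value):
    # Write-time: annotate the fact with the responsible user.
    facts.append((entity, attribute, value, owner))

def filtered_db(predicate):
    # Read-time: a view containing only the facts the caller may see.
    return [f for f in facts if predicate(f)]

transact("alice", "acct-1", "balance", 100)
transact("bob",   "acct-2", "balance", 250)

alice_view = filtered_db(lambda f: f[3] == "alice")
assert alice_view == [("acct-1", "balance", 100, "alice")]
```

Because the filtering runs on your application server (where the peer library lives), the end user has no way to widen the predicate.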
As I understand it ... and that's far from completely ... please correct me if I'm wrong ...
The distinguishing feature of Datomic is that the query engine, or large parts of it, reside in the database client, not in the database server. Thus, as you surmise, any 'user' obtaining programmatic access to a database client can do what they like with any of the contents of the database.
On the other hand, the account system in the likes of Oracle constrains client access, so that a malicious user can only, so to speak, destroy their own data.
However, ...
Your application (the database client) need not (and b****y well better not!) provide open access to any client user. You need to authenticate and authorize your users. You can show the client user the code, but provided your application is secure, no malicious use can be made of this knowledge.
A further consideration is that Datomic can sit in front of a SQL database, to which constrained access can be constructed.
A web search turned up Chas. Emerick's Friend library for user authentication and authorization in Clojure. It also found John P Hackworth's level-headed assessment that Clojure web security is worse than you think.
You can use transaction functions to enforce access restrictions for your peers/users. You can put data that describes your policies into the db and use the transaction function(s) to enforce them. This moves the mechanism and policy into the transactor. Transactions that do not meet the criteria can either fail or simply result in no data being transacted.
Obviously you'll need some way to protect the policy data and transaction functions themselves.
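As a generic sketch of the idea (illustrative Python, not the Datomic API; in Datomic the function would run inside the transactor, so peers cannot bypass it):

```python
# A transaction function consults policy data, stored in the db itself,
# and either returns the tx data to apply or rejects the transaction.

policies = {"alice": {"max-transfer": 100}}   # policy data kept in the db

def transfer_fn(db_policies, user, amount):
    limit = db_policies.get(user, {}).get("max-transfer", 0)
    if amount > limit:
        # Could also return [] to transact no data instead of failing.
        raise ValueError("transaction rejected by policy")
    return [("transfer", user, amount)]       # tx data to apply

assert transfer_fn(policies, "alice", 50) == [("transfer", "alice", 50)]
```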
I'm fleshing out an idea for a web service that will only allow requests from desktop applications (and desktop applications only) that have been registered with it. I can't really use a "secret key" for authentication because it would be really easy to discover and the applications that use the API would be deployed to many different machines that aren't controlled by the account holder.
How can I uniquely identify an application in a cross-platform way that doesn't make it incredibly easy for anyone to impersonate it?
You can't. As long as you put information in an uncontrolled place, you have to assume that information will be disseminated. Encryption doesn't really apply, because the only encryption-based approaches involve keeping a key on the client side.
The only real solution is to put the value of the service in the service itself, and make the desktop client be a low-value way to access that service. MMORPGs do this: you can download the games for free, but you need to sign up to play. The value is in the service, and the ability to connect to the service is controlled by the service (it authenticates players when they first connect).
Or, you just make it too much of a pain to break the security. For example, by putting a credential check at the start and end of every single method. And, because eventually someone will create a binary that patches out all of those checks, loading pieces of the application from the server. With credentials and timestamp checks in place, and using a different memory layout for each download.
Your comment proposes a much simpler scenario. Companies have a much stronger incentive to protect access to the service, and there will be legal agreements in effect regarding your liability if they fail to protect access.
The simplest approach is what Amazon does: provide a secret key, and require all clients to encrypt with that secret key. Yes, rogue employees within those companies can walk away with the secret. So you give the company the option (or maybe require them) to change the key on a regular basis. Perhaps daily.
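The Amazon-style shared-secret scheme is typically HMAC request signing: the client signs each request with the secret and the server recomputes the signature, so the secret itself never crosses the wire. A simplified sketch (header names and the canonical string format below are made up for illustration):

```python
import hashlib
import hmac
import time

def sign_request(secret_key: bytes, method: str, path: str, body: bytes) -> dict:
    # The client proves possession of the secret by signing the request.
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": timestamp, "X-Signature": signature}

def verify_request(secret_key, method, path, body, headers, max_skew=300):
    # Reject stale timestamps to blunt simple replay attacks.
    if abs(time.time() - int(headers["X-Timestamp"])) > max_skew:
        return False
    message = "\n".join([method, path, headers["X-Timestamp"]]).encode() + b"\n" + body
    expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, headers["X-Signature"])

key = b"per-customer-secret"
headers = sign_request(key, "POST", "/v1/orders", b"{}")
assert verify_request(key, "POST", "/v1/orders", b"{}", headers)
```

Rotating the per-customer secret then only means updating one key on each side; nothing about the protocol changes.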
You can enhance that with an IP check on all accesses: each customer will provide you with a set of valid IP addresses. If someone walks out with the desktop software, they still can't use it.
Or, you can require that your service be proxied by the company. This is particularly useful if the service is only accessed from inside the corporate firewall.
Encrypt it (the secret key), hard-code it, and then obfuscate the program. Use HTTPS for the web service, so that it is not caught by network sniffers.
Generate the key using hardware-specific IDs: processor ID, MAC address, etc. Think of a deterministic GUID.
You can then encrypt it and send it over the wire.
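A sketch of deriving such a deterministic, machine-bound key from hardware-ish identifiers (all of which can be spoofed, so treat this as an obstacle rather than proof of identity):

```python
import hashlib
import platform
import uuid

def machine_key() -> str:
    # Combine stable machine identifiers into one fingerprint:
    # uuid.getnode() returns the MAC address (or a random fallback if
    # none is available), plus OS and CPU architecture strings.
    material = "|".join([
        format(uuid.getnode(), "x"),
        platform.system(),
        platform.machine(),
    ])
    # Hash to a fixed-size, deterministic "GUID"-like value.
    return hashlib.sha256(material.encode()).hexdigest()

# The same machine derives the same key on every run; it changes only
# when the underlying hardware identifiers change.
assert machine_key() == machine_key()
```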