PCI Compliance - Is it necessary to encrypt the expiration date if a token is received instead of the actual account number? Should the token be encrypted?

In our setup, we are receiving a token for the primary account number (PAN), the expiration date, and some other information. We need to save this information into our database. Is it necessary/recommended to encrypt the expiration date if we are not receiving the actual PAN? Does the token itself need to be encrypted?

The token is already meaningless, so encrypting it seems like overkill.
Edit
Just looked up the PCI DSS 3.0 Quick Ref. and here's what it says:
Merchants and any other service providers involved with payment card
processing must never store sensitive authentication data after
authorization. This includes sensitive data that is printed on a card,
or stored on a card’s magnetic stripe or chip – and personal
identification numbers entered by the cardholder.
Looks like you need to discard the info rather than encrypt it.

According to the PCI DSS standards, encryption by itself actually has almost no compliance value, as it doesn't take anything out of scope. That's not to say you shouldn't be encrypting data, but doing so really has virtually no effect on the PCI DSS question.

Related

How to control access to personal information stored on a blockchain (with an educational use case)

Would the following use case be possible?
At a national level, the government wants its regional educational directorates to build a system to certify diplomas. Those diplomas should be stored on a blockchain in such a way that no region could, alone, tamper with them after they were issued.
Each student should be able to grant temporary access to his or her diplomas to anyone (e.g. an employer wanting to recruit).
Please correct me:
I think this should be possible if the data stored on the blockchain were encrypted and if the DApp were able to generate a temporary key to decrypt that data.
Obviously any employer gaining access to the record could make a copy of it, but the point here is that after the key's validity expires, no employer should be able to prove that he holds the real record.
Does that sound like a valid use case for DApps in general? Does it even sound feasible to you?
The following scenario can be suggested as the simplest option:
We create a smart contract with 3 methods:
function RegistryRequest(bytes32 info_id, bytes32 user_id, string memory public_cert) public payable
function SendInfo(bytes32 info_id, bytes32 user_id, string memory file_addr) public
function GetInfo(bytes32 info_id, bytes32 user_id) public view returns (string memory retVal)
The consumer calls the RegistryRequest method (a client-side sketch follows the steps below), passing:
info_id - the identifier of the required data
user_id - his unique identifier (e-mail, mobile phone, and so on)
public_cert - his OpenSSL public key
and attaches a certain amount of ETH to the transaction as payment for the service.
Having received the details and payment from the consumer, you:
create a file with the data he needs,
encrypt this file with the consumer's OpenSSL public key,
upload it to some web resource or transfer it via IPFS or Ethereum Swarm (or in any other way),
and, using the SendInfo method, publish the "address" of the data file (file_addr) keyed by the data and consumer identifiers (info_id, user_id).
To pay for the transaction, you use a portion of the amount received from the consumer along with the RegistryRequest.
The consumer, through the GetInfo method, uses the data identifier (info_id) and his personal identifier (user_id) to obtain the "address" of the data file, then fetches and decrypts it.
If the data changes, the updated state is published in the same way as in step 3.
After the data-provision period expires, you stop publishing updates.
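To make the consumer side of this flow more concrete, here is a minimal sketch using web3.py; the node URL, contract address, ABI, identifiers and fee are placeholders of mine, and only the RegistryRequest/GetInfo signatures come from the answer above.

# Sketch: the consumer registers a paid request, then later reads back the file "address".
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))   # placeholder node URL
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"   # placeholder
CONTRACT_ABI: list = []   # fill in with the compiled contract's ABI
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

info_id = Web3.keccak(text="diploma-2023-001")      # bytes32 data identifier (example)
user_id = Web3.keccak(text="alice@example.com")     # bytes32 consumer identifier (example)
public_cert = "-----BEGIN PUBLIC KEY-----..."        # consumer's OpenSSL public key (PEM, placeholder)

# Step 1: RegistryRequest with the service fee attached in ETH.
tx_hash = contract.functions.RegistryRequest(info_id, user_id, public_cert).transact(
    {"from": w3.eth.accounts[0], "value": w3.to_wei(0.01, "ether")}
)
w3.eth.wait_for_transaction_receipt(tx_hash)

# Later: fetch the "address" of the encrypted data file published via SendInfo.
file_addr = contract.functions.GetInfo(info_id, user_id).call()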
More complex solutions can, for example, be discussed with experts for free and simulated on kekker.com.

Should I store failed login attempts in AWS Cognito or Dynamo DB?

I have a requirement to build a basic "3 failed login attempts and your account gets locked" functionality. The project uses AWS Cognito for Authentication, and the Cognito PreAuth and PostAuth triggers to run a Lambda function look like they will help here.
So the basic flow is to increment a counter in the PreAuth lambda, check it and block login there, or reset the counter in the PostAuth lambda (so successful logins don't end up locking the user out). Essentially it boils down to:
PreAuth Lambda:
    if failed-login-count > LIMIT:
        block login
    else:
        increment failed-login-count
PostAuth Lambda:
    reset failed-login-count to zero
Now at the moment I am using a dedicated DynamoDB table to store the failed-login-count for a given user. This seems to work fine for now.
Then I figured it'd be neater to use a custom attribute in Cognito (using CognitoIdentityServiceProvider.adminUpdateUserAttributes) so I could throw away the DynamoDB table.
However, reading https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-dg.pdf, the section titled "Configuring User Pool Attributes" states:
Attributes are pieces of information that help you identify individual users, such as name, email, and phone number. Not all information about your users should be stored in attributes. For example, user data that changes frequently, such as usage statistics or game scores, should be kept in a separate data store, such as Amazon Cognito Sync or Amazon DynamoDB.
Given that the counter will change on every single login attempt, the docs would seem to indicate I shouldn't do this...
But can anyone tell me why? Or if there would be some negative consequence of doing so?
As far as I can see, Cognito billing is purely based on storage (i.e. number of users), and not operations, whereas Dynamo charges for read/write/storage.
Could it simply be AWS not wanting people to abuse Cognito as a storage mechanism? Or am I being daft?
We are dealing with a similar problem, and the main reason we decided to store extra attributes in a DB is that Cognito has quotas on all of its actions, and "AdminUpdateUserAttributes" is limited to 25 calls per second.
More information here:
https://docs.aws.amazon.com/cognito/latest/developerguide/limits.html
So if you have a pool with 100k users or more, it can create a bottleneck if you want to update a Cognito user record on every login, etc.
Cognito UserAttributes are meant to store information about the users. This information can then be read from the client using the AWS Cognito SDK, or just by decoding the idToken on the client-side. Every custom attribute you add will be visible on the client-side.
Other downsides of custom attributes:
You only have 25 values to set.
They cannot be removed or changed once added to the user pool.
I have personally used custom attributes, and the interface for manipulating them is not excellent. But that is just a personal opinion.
If you want to store this information and not depend on DynamoDB, you can use Amazon Cognito Sync. Besides the service, it offers a client with great features that you can incorporate into your app.
AWS DynamoDB appears to be your best option; it is commonly used for such use cases. Some of the benefits of using it:
You can store a separate record for each login attempt with as much info as you want, such as IP address, location, user-agent, etc. You can also add a datetime that the pre-auth Lambda can use to query by time range, for example failed attempts within the last 30 minutes.
You don't need to manage table cleanup because you can set a TTL on each DynamoDB record so that it is deleted automatically after a specified time.
You can also archive items in S3.
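A rough sketch of what this could look like in the PreAuth Lambda with boto3; the table name, key schema, limit and 30-minute window are illustrative assumptions, not something from the question.

# Sketch: PreAuth trigger backed by a DynamoDB table keyed by (username, attempt_time),
# with an "expires_at" TTL attribute so old attempts disappear on their own.
import time
import boto3
from boto3.dynamodb.conditions import Key

LIMIT = 3
WINDOW_SECONDS = 30 * 60
table = boto3.resource("dynamodb").Table("failed_logins")   # assumed table name

def record_failed_attempt(username: str) -> None:
    now = int(time.time())
    table.put_item(Item={
        "username": username,
        "attempt_time": now,
        "expires_at": now + WINDOW_SECONDS,   # DynamoDB TTL deletes the item automatically
    })

def lambda_handler(event, context):
    username = event["userName"]
    now = int(time.time())
    # Count failed attempts recorded within the last 30 minutes.
    resp = table.query(
        KeyConditionExpression=Key("username").eq(username)
        & Key("attempt_time").gte(now - WINDOW_SECONDS)
    )
    if resp["Count"] >= LIMIT:
        raise Exception("Account temporarily locked")   # surfaced to the client as a sign-in failure
    return event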

Securing REST API with hashed signature

I've asked a question related to this one here:
Securely Passing UserID from ASP.Net to Javascript
However, now I have a more detailed/specific question. I have the service and I have the application that is going to consume it. My plan to secure it is to generate a hash based on some values, a nonce, and a secret key. My only issue is that it seems that, in order to verify the hash, I will have to send all of the values plus the nonce, except the secret key. Is this a flaw in my design, or is this how such things are done? I have googled around and haven't been able to find out whether this is the right and secure way to do this.
For example, let's say I need to pass values 1, 2, and 3 to my REST service. I use the user's phone number, the nonce, and the secret key to generate a hash; now, in order to generate the hash again, I would need to pass all of the above except the key (which can be retrieved based on the user's phone number).
Am I leaving my service totally open to attack, securing it properly, or somewhere in between?
EDIT: made a spelling and grammar correction
EDIT 2: Finally came to a satisfactory solution by using MVC 4 with forms authentication, identical cookie names between the two projects, and a globally applied [Authorize] attribute.
There is nothing inherently wrong with this plan. If the client sends:
data . nonce . hash(data . nonce . shared-secret)
and the server verifies the message by checking that hash(data . nonce . shared-secret) matches the hash provided by the client, then you are safe against both tampering and replay (assuming, of course, that you're using a reasonable cryptographic hash and that the server rejects nonces it has already seen).
Under this design, the client can even generate its own nonces, provided there is no risk that two clients will generate the same nonce.
However, eavesdroppers will still be able to see all the data you send… So, unless there is a very good reason not to, I would simply use HTTPS (which, unless there are other requirements I'm unaware of, would be entirely sufficient).
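To illustrate, a minimal sketch of that scheme in Python; the helper names are mine, and I've used HMAC-SHA256 rather than a bare hash, since HMAC is the standard way to mix a shared secret into a digest.

# Sketch: client signs (data, nonce) with the shared secret; server recomputes and compares.
import hmac
import hashlib
import secrets

def sign(data: bytes, shared_secret: bytes):
    nonce = secrets.token_bytes(16)
    mac = hmac.new(shared_secret, data + nonce, hashlib.sha256).hexdigest()
    return nonce, mac            # client sends data, nonce and mac; never the secret

def verify(data: bytes, nonce: bytes, mac: str,
           shared_secret: bytes, seen_nonces: set) -> bool:
    if nonce in seen_nonces:     # reject replays of a previously used nonce
        return False
    expected = hmac.new(shared_secret, data + nonce, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, mac):   # constant-time comparison
        return False
    seen_nonces.add(nonce)
    return True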

Improve my Shared Secret Algorithm/Methodology & suggest a Encryption Protocol

I am looking for a protocol/algorithm that will allow me to use a shared secret between my app and an HTML page.
The shared secret is designed to ensure only people who have the app can access the webpage.
My problem: I do not know what algorithm (i.e. what methodology to validate legitimate access to the HTML page) and what encryption protocol I should use for this.
People have suggested that I use HMAC-SHAxxx, DES, or AES, but I am unsure which I should use - do you have any suggestions?
My algorithm is like so:
I create a shared secret that the app and the HTML page both know (let's call it "MySecret"). To ensure that the shared secret is always unique, I will add the current date and minute to the end of the secret, then hash it using the XXX algorithm/protocol (HMAC/AES/DES). So the unencrypted secret will be "MySecret08/17/2011-11-11", and let's say the hash of that is "xyz".
I then add this hash to the URL as a CGI parameter: http://mysite.com/comp.py?sharedSecret=xyz
The comp.py script then uses the same shared secret and date combination, hashes it, and checks that the resulting hash is the same as the CGI variable sharedSecret ("xyz"). If it is, then I know a valid user is accessing the webpage.
Can you think of a better methodology to ensure only valid people can access my webpage (the webpage allows the user to enter a competition)?
I think I am on the correct track using a shared secret, but my methodology for validating the secret seems flawed, especially if the hash algorithm doesn't produce the same result for the same input all the time.
especially if the hash algorithm doesn't produce the same result for the same input all the time.
Then the hash is broken. Why wouldn't it?
You want HMAC in the simple case. You are "signing" your request using the shared secret, and the signature is verified by the server. Note that the HMAC should include more data to prevent replay attacks - in fact it should include all query parameters (in a specified order), along with a serial number to prevent the replay of the same message by an eavesdropper. If all you are verifying is the shared secret, anyone overhearing the message can continue to use this shared secret until it expires. By including a serial number, or a short validity range, you can configure the server to flag that.
Note that this is still imperfect. TLS supports client-side and server-side certificates - why not use that?
That looks like it would work. Clock drift could be a problem; you may need to validate a range of, say, +/- 3 minutes if the check fails for the exact time.
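A sketch of how comp.py could tolerate that drift, assuming the client builds the hash exactly as described in the question (secret plus a per-minute timestamp); the choice of SHA-256 and the timestamp format string are my assumptions.

# Sketch: accept a hash computed for the current minute or any minute within +/- 3 minutes.
import hashlib
import hmac
from datetime import datetime, timedelta, timezone

SHARED_SECRET = "MySecret"       # illustrative; load from configuration in practice
DRIFT_MINUTES = 3

def is_valid(shared_secret_param: str) -> bool:
    now = datetime.now(timezone.utc)
    for offset in range(-DRIFT_MINUTES, DRIFT_MINUTES + 1):
        stamp = (now + timedelta(minutes=offset)).strftime("%m/%d/%Y-%H-%M")
        candidate = hashlib.sha256((SHARED_SECRET + stamp).encode()).hexdigest()
        if hmac.compare_digest(candidate, shared_secret_param):
            return True
    return False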
flawed especially if the hash algorithm doesn't produce the same result for the same input all the time
Well, that would be a broken hash algorithm then. A hash reliably produces the same output for the same input every time (and almost always a different output for a different input).
Try using some sort of network encryption. Your web server should be able to handle that type of authentication automatically. All that remains is for you to write it into your app (which you have to do anyway). Depending on your app platform, you may be able to do that automatically as well.
Google these: Kerberos, SPNEGO and HTTP 401 Authorization Required. You may be able to get away with simple hard-coded user name and password HTTP headers and run your connections over HTTPS. That way you have less custom code on your server and your server takes care of authenticating your requests for you. Not to mention you are taking advantage of some additional features of HTTP.
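For the last suggestion (username and password over HTTPS), the client side can be as small as this sketch with the requests library; the URL is taken from the question and the credentials are placeholders.

# Sketch: let HTTPS protect the transport; HTTP Basic auth identifies the app.
import requests

resp = requests.get(
    "https://mysite.com/comp.py",          # the question's URL, switched to HTTPS
    auth=("app_user", "app_password"),     # hard-coded credentials, as suggested above
    timeout=10,
)
resp.raise_for_status()                     # a 401 here means the server rejected the credentials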

Multiuser access to encrypted data

I'm building a server-side application which requires the data to be stored encrypted in the database. When a client accesses the data, it also has to be transferred encrypted. Each client has a unique login.
My original idea was to store the data encrypted with a symmetric algorithm like AES. When a client wants to access the data, the encrypted data is transferred to the client, while the AES key is encrypted with the client's public key.
Is this a secure way to store and transfer the data, or is there a better solution to this problem?
Update: If I follow Søren's suggestion to keep a copy of the AES key encrypted with each client's public key, wouldn't that require the key to be stored somewhere in order to add additional clients, or could it be generated in some other way?
First you should start by defining some security properties you want to provide, for example:
Is it OK to give different users access to the same secret key? I.e., if File1 is AES-encrypted with key K, is it a problem if users Alice and Bob are both given K?
How do I revoke users from the system? (It turns out Bob from scenario 1 is actually a Chinese spy working for our company; how do I securely kick him out of the system?)
Does the encrypted data that is saved in the database need to be searched? (This problem is well researched and hard to solve!)
How much (if any) and what plaintext data will be placed into the database to help organize it? Databases expect data to have unique keys associated with them. You need to make sure these keys don't leak information, but are useful enough to retrieve the data later.
How often should secret keys be changed? If you are storing files and multiple users are allowed access to encrypted files, what happens when user X modifies a file? Does the secret key change? Should the new key be sent to all users?
What happens when 2 users modify the same data at the same time? Will the database be able to handle this without modification?
There are many others.
If the server is not trusted and must never see plaintext data, then here's a general overview of a possible solution.
Let the clients manage the crypto completely. Clients authenticate with the server and are allowed to store data into the database. It is the responsibility of the client to make sure the data is encrypted.
In this scenario, keys should be saved securely only on the client's computer. If they must be placed elsewhere, a "master key" could be created.
Secure from what? You need to define your goals more clearly.
The solution would protect the data during transfer, but from your description, the server would have full access to the data (since it'd need to store the AES key unencrypted). In other words, a hacker or burglar with access to the server would have full access to the data.
If secure transmission is what you want, use an SSL / TLS wrapper around the database connection. This is a standard solution from all major vendors.
To secure the data server-side, the server should not have the AES key. If the number of clients is limited, the server could store a copy of the AES key for every client, each copy already encrypted with that client's public key, such that the server never sees the plaintext data nor any unencrypted AES keys.
That is indeed the common approach, e.g. also used by NTFS file encryption.
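A sketch of that approach with Python's cryptography package: the data is encrypted once under an AES key, and the server stores a separately wrapped copy of that key for each client (key sizes, padding choices and the return layout are my assumptions).

# Sketch: one AES-GCM data key, wrapped with each client's RSA public key,
# so the server holds only ciphertext and wrapped keys, never the plain key.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_clients(plaintext: bytes, client_public_pems: dict):
    data_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, plaintext, None)

    wrapped_keys = {}
    for client_id, pem in client_public_pems.items():
        public_key = serialization.load_pem_public_key(pem)
        wrapped_keys[client_id] = public_key.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
    # Persist (nonce, ciphertext, wrapped_keys); the plaintext data key is discarded.
    return nonce, ciphertext, wrapped_keys

Note that adding a new client later still requires something that can produce the plain data key (an existing client, or an offline master key), which is exactly the issue raised in the question's update.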