Managing key rotations with GCP KMS with a BYOK solution - google-cloud-platform

We have RSA key pairs generated on-prem and plan to sync them to GCP KMS. There is a yearly key rotation policy, which would be performed on-prem, and the new key versions would be synced to KMS. My concern is with the KMS API.
Problem: The API always asks for the 'key_version' as an argument to encrypt/decrypt a file.
Desired behaviour: During decryption, is it not possible that KMS sees the certificate thumbprint and returns the appropriate key version to decrypt a given encrypted file? e.g. a DEK wrapped with the RSA public key, when supplied to KMS, gets decrypted by the RSA private key (the KEK) of the correct version.
If yes, is there any documentation that elaborates on this use case?

According to the documentation, you can achieve that with symmetric encryption (no key version needs to be specified; KMS identifies the version from the ciphertext itself), but you can't with asymmetricDecrypt (the key version is required in the URL path of the API).
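Since asymmetricDecrypt won't resolve the version for you, a common workaround is to persist the key version name alongside each wrapped DEK at encryption time, so the decryptor always knows which cryptoKeyVersion to call. A minimal sketch (the envelope format here is my own convention, not anything defined by KMS):

```python
import base64
import json

def pack_wrapped_dek(key_version: str, wrapped_dek: bytes) -> bytes:
    # Store the cryptoKeyVersion resource name next to the RSA-wrapped DEK,
    # so the decrypting side knows which version to pass to asymmetricDecrypt.
    envelope = {
        "key_version": key_version,
        "wrapped_dek": base64.b64encode(wrapped_dek).decode("ascii"),
    }
    return json.dumps(envelope).encode("utf-8")

def unpack_wrapped_dek(blob: bytes):
    envelope = json.loads(blob)
    return envelope["key_version"], base64.b64decode(envelope["wrapped_dek"])

version_name = ("projects/p/locations/global/keyRings/r/"
                "cryptoKeys/k/cryptoKeyVersions/3")
blob = pack_wrapped_dek(version_name, b"\x01\x02\x03")
version, dek = unpack_wrapped_dek(blob)
```

The decryptor then calls asymmetricDecrypt on the version it reads back from the envelope, so yearly rotations only change which version name gets written into new envelopes.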

Related

How can I generate an HMAC key and secret key and share them with a client using AWS?

I am looking to generate an HMAC key and secret value, as I want to use it as part of API request signatures. I want to be able to share the secret value and key with a 3rd party, so I need to access the value in plain text one time. There would be an HMAC per 3rd party, so the number could be large.
Option 1: I could generate this application-side, but I don't want to store it in the DB, and I was hoping to use AWS for storage, but I'm unsure what the process would be?
Option 2: Preferably, I wanted to use AWS to generate the key and secret for the HMAC, as it can ensure uniqueness etc. I wanted it to provide the key and the secret one time. Looking at the documentation, it seems to suggest that the secret value never leaves the HSM. Is my understanding correct, or what is the best way to implement this using AWS?
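If you go with Option 1, the generation itself is trivial application-side; the open question is only storage. A sketch using just the Python standard library (function names are mine; with AWS you might then put the secret in Secrets Manager, or store it encrypted under a KMS key, rather than in your DB):

```python
import hashlib
import hmac
import secrets

def generate_hmac_credentials():
    # Generate a key id plus a random 256-bit secret, application-side.
    # The secret is shown to the 3rd party exactly once; persist only an
    # encrypted (or hashed) form, never the plaintext.
    key_id = secrets.token_hex(8)
    secret = secrets.token_urlsafe(32)
    return key_id, secret

def sign_request(secret: str, payload: bytes) -> str:
    # Standard HMAC-SHA256 request signature over the payload.
    return hmac.new(secret.encode("utf-8"), payload, hashlib.sha256).hexdigest()

key_id, secret = generate_hmac_credentials()
sig = sign_request(secret, b"GET /orders")
```

Your understanding of Option 2 is correct in spirit: a KMS HMAC key's material never leaves the HSM, which is exactly why you cannot hand the raw secret to a 3rd party from there.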

Which is an appropriate way to encrypt a message of size greater than 4 KB?

I have been working to encrypt messages larger than 4 KB using AWS KMS. As I went through the AWS KMS documentation, the maximum size of a message that can be encrypted directly with AWS KMS is only 4 KB. I tried both symmetric and asymmetric key types to encrypt the message, but couldn't get the expected results. This is the error screenshot:
And I'm pretty sure this error is due to my message being larger than 4 KB. I have the following constraints.
The encryption has to be on the frontend. This makes a purely symmetric approach problematic, because the key I use may be easily extracted by end users even if the frontend code is minified.
I am searching for a lightweight approach so that I don't have many libraries and plugins added to the frontend code.
As I went through several articles what I found is if I use the asymmetric approach, there is always a limitation of message size that can be encrypted.
I was focusing on AWS KMS because I am using aws-sdk already in my front-end code and any solution with the same SDK won't increase my code size.
So, the possible alternatives I have found as per my study (not 100% sure) are:
Hybrid encryption (outside AWS): Use a symmetric key to encrypt the message and use an asymmetric key to encrypt the symmetric key.
Envelope encryption (with AWS) (not sure how we can implement this)
Therefore, I am searching for references around AWS illustrating Envelope Encryption (with example if possible) or any other solutions satisfying the above-mentioned constraints.
If around AWS is not possible, any lightweight approaches(with practical implementation) that can be implemented on the frontend would also be highly appreciated.
Programming Language: JavaScript
Yes, envelope encryption is a good option. You can generate a random key as the content encryption key (CEK), and use AWS KMS to provide the key encryption key (KEK).
Let's say you get a plain text M.
Then the encryption process should be like this:
EM = encrypt M with CEK
ECEK = encrypt CEK with KEK
Final text = ECEK.EM
The decryption process should be like this:
CEK = decrypt ECEK with KEK
M = decrypt EM with CEK
Just make sure the length of the CEK is less than 4 KB.
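The flow above can be sketched end to end. Note that the XOR keystream cipher and `kms_wrap` below are deliberately toy stand-ins for a real AEAD cipher (e.g. AES-GCM) and the actual KMS Encrypt/Decrypt calls; only the envelope structure is the point:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher (SHA-256 in counter mode). Stand-in for a real AEAD
    # cipher such as AES-GCM; do not use this construction in production.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def kms_wrap(kek: bytes, cek: bytes) -> bytes:
    # Stand-in for the KMS Encrypt/Decrypt call that (un)wraps the CEK.
    return keystream_xor(kek, cek)

kek = secrets.token_bytes(32)      # held inside KMS in reality
cek = secrets.token_bytes(32)      # random per-message content key
message = b"a payload well over the 4 KB KMS limit..."

em = keystream_xor(cek, message)   # EM   = encrypt M with CEK
ecek = kms_wrap(kek, cek)          # ECEK = encrypt CEK with KEK
final = ecek + em                  # Final text = ECEK.EM

# Decryption: split, unwrap the CEK via KMS, then decrypt the message locally.
recovered_cek = kms_wrap(kek, final[:32])
plaintext = keystream_xor(recovered_cek, final[32:])
```

Because the wrapped CEK is fixed-length here, splitting on a 32-byte boundary works; a real format would length-prefix ECEK. Only the 32-byte CEK ever goes to KMS, so the 4 KB limit never applies to the message itself.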

How to Use encryption and signing keys of GCP Shielded VM

I am wondering how the signing key and encryption key of a GCP Shielded VM instance can be used. I am thinking of using the encryption key (ekPub) to encrypt an arbitrary blob of data and be sure only the associated GCP instance can decrypt it. But I am not sure how to ask the vTPM to decrypt the encrypted data?
Shielded VM and Confidential Computing are 2 different features on Google Cloud.
Shielded VM checks at startup whether any component has been tampered with, which could lead to a data leak (through malware/backdoors).
Confidential Computing automatically creates a cryptographic key at startup. This key is used to encrypt all the data in memory. The data are only decrypted inside the CPU, while being processed.
When the data are written to disk, they are read from encrypted memory, decrypted in the CPU, and written in plaintext to the disk, which is itself automatically encrypted (but by another process, not by the CPU).
You don't have to do anything; it's automatic!
Background and Definitions
The endorsement key (EK) is a key on TPM2.0 that is used for attestation. The EK typically comes with a certificate signed by the manufacturer (note, not available on GCE instances) stating that the TPM is a genuine TPM[1]. However, the TCG had privacy concerns around attestation with one signing key. So, they decided to make the endorsement key an encryption key. The ActivateCredential flow[2] is typically used to trust a new signing key. This sidesteps the privacy concerns by allowing the use of a privacy CA to create an AK cert endorsing that the EK and AK are on the same TPM. GCE creates an AK by default that allows users to avoid this process by using the get-shielded-identity API.
Decryption
There are a few ways to encrypt data using the endorsement key.
Since the EK is restricted [3], you have to jump through some hoops to use it. Restricted here means the key cannot be used for general decryption. Rather, restricted keys are used for storing/wrapping TPM objects. A storage key is typically a restricted decryption key.
Here are some ways you can get around this problem:
1. Use TPM2_Import and TPM2_Unseal (Part 3 of the TPM spec [3])
TPM2_Import has the TPM decrypt an external blob (public and private) with a storage key. Then, the user can load that object under the storage key and use it. TPM2_Unseal returns the secret within the sealed blob.
The flow is roughly the following:
A remote entity creates a blob containing a private part and a corresponding public part. The private part contains the original secret to decrypt.
Remote entity uses an EK to wrap a seed for a known KDF that derives a symmetric and HMAC key.
Use seed and KDF derived key to encrypt the private part. This is the "duplicate" blob.
Send duplicate, public, and encrypted seed to the VM.
TPM2_Import on duplicate, public, and encrypted seed with handle for the EK.
TPM2_Load on public and outPrivate (decrypted private) from TPM2_Import.
TPM2_Unseal on the object handle, secret will be in outData.
This is all done for you in https://github.com/google/go-tpm-tools. All you need is to pass in the PEM, decode it, and parse it into a public key.
Then you can use server.CreateImportBlob.
Send the output blob to the VM.
On the client side, use EndorsementKeyRSA (or EndorsementKeyECC) to create a go-tpm-tools key.
Use key.Import with the blob.
Specifically, see https://pkg.go.dev/github.com/google/go-tpm-tools/server#CreateImportBlob and https://pkg.go.dev/github.com/google/go-tpm-tools/tpm2tools#Key.Import
Note package tpm2tools was recently renamed client, but this is not yet a public release.
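To make steps 2-3 of the flow concrete, here is a rough, non-TPM sketch of the seed-to-keys idea. The KDF and XOR cipher below are simplifications of the spec's KDFa and symmetric protection, chosen only to illustrate how one seed yields both an encryption key and an integrity key; use go-tpm-tools rather than hand-rolling any of this:

```python
import hashlib
import hmac

def kdf(seed: bytes, label: bytes, length: int) -> bytes:
    # Simplified stand-in for the TPM's KDFa: derive `length` bytes from seed+label.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + label + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def make_duplicate_blob(seed: bytes, private_part: bytes):
    # One seed derives two keys: a symmetric key to encrypt the private part
    # and an HMAC key to authenticate it (steps 2-3 of the import flow).
    sym_key = kdf(seed, b"STORAGE", 32)
    mac_key = kdf(seed, b"INTEGRITY", 32)
    stream = kdf(sym_key, b"XOR", len(private_part))  # toy cipher, not the spec's AES-CFB
    duplicate = bytes(a ^ b for a, b in zip(private_part, stream))
    tag = hmac.new(mac_key, duplicate, hashlib.sha256).digest()
    return duplicate, tag

# In reality the seed is random and wrapped to the EK public key with RSA-OAEP
# (step 2); the TPM re-derives the same keys internally during TPM2_Import.
seed = hashlib.sha256(b"demo-seed").digest()
duplicate, tag = make_duplicate_blob(seed, b"sealed secret")
```

The duplicate blob, the public part, and the EK-wrapped seed are what get sent to the VM in step 4.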
2. Use TPM2_ActivateCredential (TPM spec, Part 3)
ActivateCredential allows you to verify a key is co-resident with another. Again, while this is typically used for attestation, you can use this to create an asymmetric key pair for general decryption.
In this scenario, the VM would generate an unrestricted decryption key on the TPM.
The server then generates the ActivateCredential challenge with the known templates of the EK and the decryption key.
If the decryption key's properties match, the TPM can fetch the challenge secret and return it to the server.
The server, upon receiving the successful response, can rely on the corresponding public key generated in the challenge and encrypt data to the VM.
One thing you may notice is, if you only want to decrypt a few times, you can just use the challenge secret as the plaintext.
You would need to stitch this together using https://pkg.go.dev/github.com/google/go-tpm/tpm2/credactivation and
https://pkg.go.dev/github.com/google/go-tpm/tpm2#ActivateCredential, as I don't currently know of tooling that supports this out of the box.
References
[1] EK specification: https://trustedcomputinggroup.org/resource/tcg-ek-credential-profile-for-tpm-family-2-0/
[2] Credential activation: https://github.com/google/go-attestation/blob/master/docs/credential-activation.md
[3] TPM spec: https://trustedcomputinggroup.org/resource/tpm-library-specification

aws s3 | bucket key enabled

S3 recently announced the "bucket_key_enabled" option to cache the KMS key used to encrypt the bucket contents so that the number of calls to the KMS server is reduced.
https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html
So if the bucket is configured with
server side encryption enabled by default
use a kms key "key/arn1" for the above
by selecting "enable bucket key", we are caching "key/arn1" so that each object in this bucket does not require a call to the KMS server (perhaps internally it has a time-to-live etc., but the crux is that this key is cached and thus KMS limit errors can be avoided)
Given all that, what is the point of overriding kms key at object level and still having this "bucket_key_enabled" set?
Eg :
bucket/ -> kms1 & bucket_key_enabled
bucket/prefix1 -> kms2 & bucket_key_enabled
Does s3 actually cache the object-key to kms-key map?
To give you the context, I currently have the application which publishes data to the following structure
bucket/user1
bucket/user2
While publishing to these prefixes, it explicitly passes the KMS key assigned to each user for each object upload.
bucket/user1/obj1 with kms-user-1
bucket/user1/obj2 with kms-user-1
bucket/user1/obj3 with kms-user-1
bucket/user2/obj1 with kms-user-2
bucket/user2/obj2 with kms-user-2
bucket/user2/obj3 with kms-user-2
if s3 is smart enough to reduce this to the following map,
bucket/user1 - kms-user-1
bucket/user2 - kms-user-2
All I have to do is upgrade the SDK library to the latest version and add withBucketKeyEnabled(true) to the putObjectRequest in the s3Client wrapper we have.
Let me know how it works internally so that we can make use of this feature wisely.
I finally went with upgrading the SDK to the latest version and passing withBucketKeyEnabled(true) to putObject API calls.
I was able to prove with CloudTrail that the number of calls to the KMS server is the same regardless of whether encryption and bucketKeyEnabled are set at the bucket level or at each object level.
kms-key and bucketKeyEnabled=true at bucket level. No encryption option is passed at putObject() call
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
No encryption settings at s3 bucket. For each putObject() call, I am passing kms-key and bucketKeyEnabled=true.
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
.withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
.withBucketKeyEnabled<PutObjectRequest>(true)
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
With this option disabled like below,
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
.withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
Calls made to GenerateDataKey() = 10011
Calls made to Decrypt() = 10002
Thus I was able to conclude that bucketKeyEnabled works regardless of whether you set it at the bucket level or at the object level, although I do not know how it is optimized for both access patterns internally.
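AWS does not document the internals, but the call counts above are consistent with a simple model: with bucket keys on, S3 calls GenerateDataKey once per bucket-level key and derives per-object keys locally. A hypothetical sketch of that model (the KmsStub class and the derivation scheme are illustrative only, not how S3 actually works):

```python
import hashlib

class KmsStub:
    # Counts GenerateDataKey calls the way CloudTrail would.
    def __init__(self):
        self.generate_data_key_calls = 0

    def generate_data_key(self, key_arn: str) -> bytes:
        self.generate_data_key_calls += 1
        return hashlib.sha256(key_arn.encode()).digest()

def put_objects(kms: KmsStub, key_arn: str, n_objects: int, bucket_key: bool):
    cache = {}  # bucket-level key cache, keyed by KMS key ARN
    for i in range(n_objects):
        if bucket_key:
            if key_arn not in cache:
                cache[key_arn] = kms.generate_data_key(key_arn)
            # Derive a per-object key locally instead of calling KMS again.
            _obj_key = hashlib.sha256(cache[key_arn] + str(i).encode()).digest()
        else:
            _obj_key = kms.generate_data_key(key_arn)

kms = KmsStub()
put_objects(kms, "arn:aws:kms:region:acct:key/arn1", 1000, bucket_key=False)
calls_without = kms.generate_data_key_calls   # one KMS call per object

kms = KmsStub()
put_objects(kms, "arn:aws:kms:region:acct:key/arn1", 1000, bucket_key=True)
calls_with = kms.generate_data_key_calls      # one KMS call per bucket key
```

Under this model, caching keyed by KMS key ARN also explains why per-user keys under different prefixes would each get their own cached bucket key.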

AWS Encryption SDK Header Mismatch between Regions

I'm using the Amazon Encryption SDK to encrypt data before storing it in a database. I'm also using Amazon KMS. As part of the encryption process, the SDK stores the Key Provider ID of the data key used to encrypt in the generated cipher-text header.
As described in the documentation here http://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/message-format.html#header-structure
The encryption operations in the AWS Encryption SDK return a single
data structure or message that contains the encrypted data
(ciphertext) and all encrypted data keys. To understand this data
structure, or to build libraries that read and write it, you need to
understand the message format.
The message format consists of at least two parts: a header and a
body. In some cases, the message format consists of a third part, a
footer.
The Key Provider ID value contains the Amazon Resource Name (ARN) of the AWS KMS customer master key (CMK).
Here is where the issue comes in. Right now I have two different KMS regions available for encryption. Each Key Provider ID has the exact same Encrypted Data Key value. So either key could be used to decrypt the data. However, the issue is with the ciphertext headers. Let's say I have KMS1 and KMS2. If I encrypt the data with the key provided by KMS1, then the Key Provider ID will be stored in the ciphertext header. If I attempt to decrypt the data with KMS2, even though the Encrypted Data Key is the same, the decryption will fail because the header does not contain the Key Provider for KMS2. It has the Key Provider ID for KMS1. It fails with this error:
com.amazonaws.encryptionsdk.exception.BadCiphertextException: Header integrity check failed.
at com.amazonaws.encryptionsdk.internal.DecryptionHandler.verifyHeaderIntegrity(DecryptionHandler.java:312) ~[application.jar:na]
at com.amazonaws.encryptionsdk.internal.DecryptionHandler.readHeaderFields(DecryptionHandler.java:389) ~[application.jar:na]
...
com.amazonaws.encryptionsdk.internal.DecryptionHandler.verifyHeaderIntegrity(DecryptionHandler.java:310) ~[application.jar:na]
... 16 common frames omitted
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
It fails to verify the header integrity and fails. This is not good, because I was planning to have multiple KMS's in case of one region KMS failing. We duplicate our data across all our regions, and we thought that we could use any KMS from the regions to decrypt as long as the encrypted data keys match. However, it looks like I'm locked into using only the original KMS that was encrypting the data? How on earth can we scale this to multiple regions if we can only rely on a single KMS?
I could include all the region master keys in the call to encrypt the data. That way, the headers would always match, although it would not reflect which KMS it's actually using. However, that's also not scalable, since we could add/remove regions in the future, and that would cause issues with all the data that's already encrypted.
Am I missing something? I've thought about this, and I want to solve this problem without crippling any integrity checks provided by the SDK/Encryption.
Update:
Based on a comment from @jarmod
Using an alias doesn't work either because we can only associate an alias to a key in the region, and it stores the resolved name of the key ARN it's pointing to anyway.
I'm reading this document and it says
Additionally, envelope encryption can help to design your application
for disaster recovery. You can move your encrypted data as-is between
Regions and only have to reencrypt the data keys with the
Region-specific CMKs
However, that's not accurate at all, because the encryption SDK will fail to decrypt on a different region because the Key Provider ID of the re-encrypted data keys will be totally different!
Apologies, since I'm not familiar with Java programming, but I believe there is confusion about how you are using the KMS CMKs to encrypt (or decrypt) the data using keys from more than one region for DR.
When you use multiple master keys to encrypt plaintext, any one of the master keys can be used to decrypt the plaintext. Note that only one master key (let's say MKey1) generates the plaintext data key which is used to encrypt the data. This plaintext data key is then encrypted by the other master key (MKey2) as well.
As a result, you will have encrypted data + encrypted data key (using MKey1) + encrypted data key (using MKey2).
If for some reason MKey1 is unavailable and you want to decrypt the ciphertext, the SDK can be used to decrypt the encrypted data key using MKey2, and the recovered data key can then decrypt the ciphertext.
So, yes, you have to specify multiple KMS CMK ARNs in your program if you want to use multiple KMS keys. The document you shared has an example as well, which I'm sure you are aware of.
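The multi-master-key layout can be sketched like this; `wrap` is a toy XOR stand-in for KMS Encrypt under each regional CMK, and the dict stands in for the Encryption SDK's message header holding one encrypted data key (EDK) per master key:

```python
import hashlib
import secrets

def wrap(kek: bytes, data_key: bytes) -> bytes:
    # Stand-in for KMS Encrypt under a regional CMK (toy XOR cipher;
    # XOR with the same keystream also unwraps).
    stream = hashlib.sha256(kek).digest()
    return bytes(a ^ b for a, b in zip(data_key, stream))

unwrap = wrap  # the toy cipher is its own inverse

kek_us = secrets.token_bytes(32)  # CMK in region 1
kek_eu = secrets.token_bytes(32)  # CMK in region 2
data_key = secrets.token_bytes(32)  # generated once (by MKey1 in reality)

# One encrypted data key per master key travels with the ciphertext.
message_header = {
    "edk_us": wrap(kek_us, data_key),
    "edk_eu": wrap(kek_eu, data_key),
}
```

Either region can recover the same data key from its own EDK, which is why listing every regional CMK at encrypt time is what makes cross-region decryption work.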