How do I store a private key using Google Cloud's Cloud KMS?

I've been trying to read the documentation and watch a few of their videos, but I'm not entirely clear on how to store a private key using GCP's Cloud KMS.
Is the idea for me to store the private key in storage, then use Cloud KMS to encrypt it? How can I make this key available as a secret to my application?
I know this is a very basic question, but I couldn't find an easy breakdown on how to do this - I'm looking for a simple explanation about the concept. Thanks!

Please read for yourself: https://cloud.google.com/kms/docs ...maybe you'll come up with a more focused question. And I think there's a slight misunderstanding: you'd only be able to retrieve these secrets server-side, not client-side (otherwise the client would need the RSA private key of the service account that has access to Cloud KMS, which would be a security breach in itself). So this is generally only useful for a) server-side applications and b) e.g. Google Cloud Build.
Generally one has to:
create the keyring with gcloud kms keyrings create
create the key with gcloud kms keys create
then use gcloud kms encrypt and gcloud kms decrypt
I can also provide a usage example (it assumes a key ring with a key); see the sketch below.
Just to show that one doesn't necessarily have to set up secrets:
gcloud kms can well provide build secrets, assuming one can use a service account with the role roles/cloudkms.cryptoKeyEncrypterDecrypter. The given example decrypts all kinds of build secrets, without having to deal with base64-encoded binary files in metadata (which is more a workaround than an actual solution).
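A minimal sketch of that flow (the key-ring and key names, the location, and the file names are hypothetical):
gcloud kms keyrings create my-key-ring --location us-central1
gcloud kms keys create my-key --location us-central1 --keyring my-key-ring --purpose encryption
# encrypt a local secret; commit/ship only the ciphertext
gcloud kms encrypt --location us-central1 --keyring my-key-ring --key my-key \
  --plaintext-file build-secret.txt --ciphertext-file build-secret.txt.enc
# at build time, decrypt it back (needs roles/cloudkms.cryptoKeyEncrypterDecrypter)
gcloud kms decrypt --location us-central1 --keyring my-key-ring --key my-key \
  --ciphertext-file build-secret.txt.enc --plaintext-file build-secret.txt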

This is a high-level description of storing a private key on Google Cloud Platform (GCP):
I ended up using Google KMS.
The general steps to create a key are:
Create a key ring within your project.
Create a key with ENCRYPT_DECRYPT purpose. (Note that this purpose creates a symmetric key; a true asymmetric decryption key would use the ASYMMETRIC_DECRYPT purpose. If, like me, you're trying to do this using Terraform, the google_kms_key_ring and google_kms_crypto_key resources are documented in the Terraform provider.)
Once we've created the key, we can encrypt the data we want to secure using it.
It is important to note that the key material itself never leaves Cloud KMS (i.e. only GCP ever handles the raw key), so we never handle the private key material ourselves; we only call the encrypt and decrypt endpoints.
Here's how you'd encrypt some data from your local computer:
echo -n my-secret-password | gcloud kms encrypt \
  --project my-project \
  --location us-central1 \
  --keyring my-key-ring \
  --key my-crypto-key \
  --plaintext-file - \
  --ciphertext-file - \
  | base64
This will output the ciphertext, base64-encoded, e.g.:
CiQAqD+xX4SXOSziF4a8JYvq4spfAuWhhYSNul33H85HnVtNQW4SOgDu2UZ46dQCRFl5MF6ekabviN8xq+F+2035ZJ85B+xTYXqNf4mZs0RJitnWWuXlYQh6axnnJYu3kDU=
This ciphertext then needs to be stored as a secret. Once it is stored as a secret, we do the following in our application code to decrypt it into a usable format.
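Before wiring this into application code, you can sanity-check the round trip from a shell (a sketch, assuming the same project, location, key ring, and key as above):
# CIPHERTEXT_B64 holds the base64 output from the encrypt step
echo "$CIPHERTEXT_B64" | base64 --decode | gcloud kms decrypt \
  --project my-project \
  --location us-central1 \
  --keyring my-key-ring \
  --key my-crypto-key \
  --ciphertext-file - \
  --plaintext-file -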
Here is an example of decrypting the ciphertext using the @google-cloud/kms module: https://cloud.google.com/kms/docs/hsm#kms-decrypt-symmetric-nodejs
This is what it looks like in Node.js:
//
// TODO(developer): Uncomment these variables before running the sample.
//
// const projectId = 'my-project';
// const locationId = 'us-east1';
// const keyRingId = 'my-key-ring';
// const keyId = 'my-key';
// Ciphertext must be either a Buffer object or a base64-encoded string
// const ciphertext = Buffer.from('...');

// Imports the Cloud KMS library
const {KeyManagementServiceClient} = require('@google-cloud/kms');

// Instantiates a client
const client = new KeyManagementServiceClient();

// Build the key name
const keyName = client.cryptoKeyPath(projectId, locationId, keyRingId, keyId);

// Optional, but recommended: compute the ciphertext's CRC32C.
const crc32c = require('fast-crc32c');
const ciphertextCrc32c = crc32c.calculate(ciphertext);

async function decryptSymmetric() {
  const [decryptResponse] = await client.decrypt({
    name: keyName,
    ciphertext: ciphertext,
    ciphertextCrc32c: {
      value: ciphertextCrc32c,
    },
  });

  // Optional, but recommended: perform integrity verification on decryptResponse.
  // For more details on ensuring E2E in-transit integrity to and from Cloud KMS visit:
  // https://cloud.google.com/kms/docs/data-integrity-guidelines
  if (
    crc32c.calculate(decryptResponse.plaintext) !==
    Number(decryptResponse.plaintextCrc32c.value)
  ) {
    throw new Error('Decrypt: response corrupted in-transit');
  }

  const plaintext = decryptResponse.plaintext.toString();
  console.log(`Plaintext: ${plaintext}`);
  return plaintext;
}

decryptSymmetric();

Related

Multiple Firestore connection

I need help tackling a scenario where I need to connect to multiple Firestores in different Google Cloud projects.
Right now, I am using NestJS to retrieve data from my Firestore, connecting to it using a JSON key generated from a Service Account.
I am planning to make this primary Firestore store data that tells the app which database to connect to. However, I'm not sure how to do the switching of service accounts/JSON keys, since, from what I understand so far, one JSON key is for one Firestore. I also think that it's not good practice to store those JSON key files.
What are my possible options here?
You can use Secret Manager to store your Firestore configurations. To start:
Create a secret by navigating to Cloud Console > Secret Manager.
You should enable the Secret Manager API if you haven't done so.
Click Create Secret.
Fill in the Name, e.g. FIRESTORE.
On Secret value, you can either upload the JSON file or paste the secret value.
Click Create Secret.
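Equivalently, a sketch of creating the secret from the CLI (the secret name FIRESTORE and the file name are example values):
gcloud services enable secretmanager.googleapis.com
gcloud secrets create FIRESTORE --replication-policy="automatic"
# upload the Firebase configuration JSON as the first secret version
gcloud secrets versions add FIRESTORE --data-file="./firestore-config.json"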
After creating a secret, go to your project and install @google-cloud/secret-manager:
npm i @google-cloud/secret-manager
then initialize it like this:
import {SecretManagerServiceClient} from '@google-cloud/secret-manager';
const client = new SecretManagerServiceClient();
You could now use the stored configuration on your project. See code below for reference:
import { initializeApp } from "firebase/app";
import { getFirestore, doc, getDoc } from "firebase/firestore";
import { SecretManagerServiceClient } from "@google-cloud/secret-manager";

const client = new SecretManagerServiceClient();

// Must follow the expected format: projects/*/secrets/*/versions/*
// You can always use `latest` if you want to use the latest uploaded version.
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest';

async function accessSecretVersion() {
  const [version] = await client.accessSecretVersion({
    name: name,
  });

  // Extract the payload as a string.
  // WARNING: Do not print the secret in a production environment.
  const payload = version?.payload?.data?.toString();
  if (!payload) {
    throw new Error(`Secret ${name} has no payload`);
  }

  const config = JSON.parse(payload);
  const firebaseApp = initializeApp({
    apiKey: config.apiKey,
    authDomain: config.authDomain,
    databaseURL: config.databaseURL,
    projectId: config.projectId,
    storageBucket: config.storageBucket,
    messagingSenderId: config.messagingSenderId,
    appId: config.appId,
    measurementId: config.measurementId
  });

  const db = getFirestore(firebaseApp);
  const docRef = doc(db, "cities", "SF");
  const docSnap = await getDoc(docRef);

  if (docSnap.exists()) {
    console.log("Document data:", docSnap.data());
  } else {
    // docSnap.data() will be undefined in this case
    console.log("No such document!");
  }
}

accessSecretVersion();
You should also create secrets in your other projects, and make sure each project's IAM permissions are set so they can access each other. You can easily choose/switch which Firestore you use by modifying the secret name here:
const name = 'projects/PROJECT-ID/secrets/FIRESTORE/versions/latest'
For convenience, you can give the secrets identical names, given that they live in different projects; you then just change the PROJECT-ID for whichever project's Firestore you want to access.
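To let one project's runtime service account read a secret that lives in another project, grant it the accessor role on that secret (a sketch; the service-account address and project IDs are placeholders):
gcloud secrets add-iam-policy-binding FIRESTORE \
  --project="OTHER-PROJECT-ID" \
  --member="serviceAccount:my-app@MAIN-PROJECT-ID.iam.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"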
Creating and accessing secrets
Managing Secrets
Managing Secret Versions
API Reference Documentation
You may also want to checkout Secret Manager Best Practices.

aws s3 | bucket key enabled

S3 recently announced the "bucket_key_enabled" option, which caches the KMS key used to encrypt the bucket contents so that the number of calls to the KMS server is reduced.
https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html
So if the bucket is configured with:
server-side encryption enabled by default
a KMS key "key/arn1" used for the above
then by selecting "enable bucket key", we are caching "key/arn1" so that each object in this bucket does not require a call to the KMS server (perhaps internally it has a time-to-live etc., but the crux is that this key is cached and thus KMS limit errors can be avoided).
Given all that, what is the point of overriding the KMS key at the object level while still having "bucket_key_enabled" set?
Eg :
bucket/ -> kms1 & bucket_key_enabled
bucket/prefix1 -> kms2 & bucket_key_enabled
Does s3 actually cache the object-key to kms-key map?
To give you context, I currently have an application which publishes data to the following structure:
bucket/user1
bucket/user2
While publishing to these prefixes, it explicitly passes the KMS key assigned to each user for every object upload:
bucket/user1/obj1 with kms-user-1
bucket/user1/obj2 with kms-user-1
bucket/user1/obj3 with kms-user-1
bucket/user2/obj1 with kms-user-2
bucket/user2/obj2 with kms-user-2
bucket/user2/obj3 with kms-user-2
If S3 is smart enough to reduce this to the following map:
bucket/user1 - kms-user-1
bucket/user2 - kms-user-2
then all I have to do is upgrade the SDK library to the latest version and add withBucketKeyEnabled(true) to the PutObjectRequest in the s3Client wrapper we have.
Let me know how it works internally so that we can make use of this feature wisely.
I finally went with upgrading the SDK to the latest version and passing withBucketKeyEnabled(true) to putObject API calls.
I was able to prove with CloudTrail that the number of calls to the KMS server is the same regardless of whether encryption and bucketKeyEnabled are set at the bucket level or at "each" object level.
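For anyone repeating the measurement, one way to count the calls (a sketch; assumes CloudTrail is recording KMS management events and relies on the CLI's automatic pagination):
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=GenerateDataKey \
  --start-time 2021-01-01T00:00:00Z --end-time 2021-01-02T00:00:00Z \
  --query 'length(Events)'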
kms-key and bucketKeyEnabled=true at the bucket level; no encryption option is passed in the putObject() call:
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
No encryption settings on the S3 bucket; for each putObject() call, I pass the kms-key and bucketKeyEnabled=true:
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
.withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
.withBucketKeyEnabled<PutObjectRequest>(true)
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
With this option disabled, like below:
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
.withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
Calls made to GenerateDataKey() = 10011
Calls made to Decrypt() = 10002
Thus I was able to conclude that bucketKeyEnabled works regardless of whether you set it at the bucket level or the object level, although I do not know how it is optimized internally for both access patterns.
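For reference, a sketch of both placements with the CLI (bucket name, object key, and key ARNs are placeholders):
# bucket-level default: SSE-KMS with the bucket key enabled
aws s3api put-bucket-encryption --bucket my-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/key-id-1"
      },
      "BucketKeyEnabled": true
    }]
  }'
# per-object override: a different KMS key, bucket key still enabled
aws s3api put-object --bucket my-bucket --key user1/obj1 --body obj1.bin \
  --server-side-encryption aws:kms \
  --ssekms-key-id arn:aws:kms:us-east-1:123456789012:key/key-id-2 \
  --bucket-key-enabled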

GCP - DLP - Decryption failed: the ciphertext is invalid error when using KMS wrapped key

While trying out a POC with GCP DLP, I'm facing the issue below:
log:
Received the following error message from Cloud KMS when unwrapping KmsWrappedCryptoKey
"projects/<<PROJECT_ID>>/locations/global/keyRings/<<KMS_KEY_RING>>/cryptoKeys
/<<KMS_KEY_NAME>>": Decryption failed: the ciphertext is invalid.
I have just created the key and key ring using the generate-key option in KMS, and a basic DLP template to pseudonymize the data with a cryptographic deterministic token. The wrapped key I supplied is a simple base64-format key. When testing this template in the console with the data, I am facing this issue. The same issue is replicated in the application logs when trying to encrypt the data.
P.S.: We have also tried generating a manual key using OpenSSL and importing it into KMS. We are still facing this issue.
Figured out the issue in this case.
The issue was with the way we created the wrapped key that we supplied in the DLP template. Below are the steps to generate the wrapped key:
Choose the raw key material to wrap (random bytes of a valid length).
Encrypt that key material using the KMS key that you are going to use in the DLP template.
Base64-encode the encrypted output and use that as the wrapped key in the DLP template.
Below are the commands for above steps in the same order:
openssl rand 16 > secret.txt
This generates a random key of 16 bytes. The size has to be one of 16, 24, or 32 bytes (mandatory).
gcloud kms encrypt --location global --keyring <key-ring-name> --key \
<key-name> --plaintext-file secret.txt --ciphertext-file \
mysecret.txt.encrypted
This encrypts the random key.
base64 mysecret.txt.encrypted
Use this in the DLP template.
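For reference, the three steps can also be collapsed into one pipeline (a sketch, with the same placeholder key-ring/key names):
openssl rand 16 | \
  gcloud kms encrypt --location global --keyring <key-ring-name> --key <key-name> \
    --plaintext-file - --ciphertext-file - | \
  base64 -w0 > wrapped-key.b64   # -w0 is GNU base64; plain base64 on macOS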
This answer helped me figure out the issue : https://stackoverflow.com/a/60513800/6908062

Getting InvalidCiphertextException with CiphertextBlob as String

I am trying to decrypt a string with AWS KMS, but I am getting an InvalidCiphertextException error (with no further information following the exception name).
I was originally decrypting in a Node.js Lambda, using an environment variable as the source for encryptedString:
var params = {
  CiphertextBlob: Buffer.from(encryptedString, 'base64')
};
kms.decrypt(params, function(err, data) {
  if (err) {
    ...
  } else {
    ...
  }
});
I have also tried it with the CiphertextBlob value as a String, i.e.:
CiphertextBlob: encryptedString
The KMS key used to encrypt the value originally is a symmetric CMK, so I believe I shouldn't need to pass in the key ID.
I also tried the same thing via awscli (passing in ciphertext-blob as a string) but got the same error:
aws kms decrypt --ciphertext-blob <encrypted string value> --query PlainText | base64 --decode
Passing in the key ID had no effect either.
I have used an online tool to validate that the encrypted string is base64. I'm not too clued up on base64 encoding, so I'm not sure if that's all it takes to prove the ciphertext is valid.
I'm sure I'm failing at something fundamental: either my encrypted string is not base64, or it's not what decrypt expects, or perhaps I am missing some additional decrypt arguments.
Thanks in advance.
Based on the comments.
The issue is with decrypting an SSM parameter. Thus, an encryption context must be provided during the decryption procedure. From the docs:
Parameter Store includes this encryption context in calls to encrypt and decrypt the MyParameter parameter in an example AWS account and region.
"PARAMETER_ARN":"arn:aws:ssm:<REGION_NAME>:<ACCOUNT_ID>:parameter/<parameter-name>"
Therefore, if you are not using get_parameter with the WithDecryption option set to True, you must provide the above encryption context during the KMS decrypt operation.
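For example, from the CLI (a sketch; the region, account ID, parameter name, and file name are placeholders):
# decrypt directly with KMS, supplying the SSM encryption context
aws kms decrypt \
  --ciphertext-blob fileb://ciphertext.bin \
  --encryption-context "PARAMETER_ARN=arn:aws:ssm:us-east-1:123456789012:parameter/my-parameter" \
  --query Plaintext --output text | base64 --decode
# or simply let SSM do the decryption for you
aws ssm get-parameter --name my-parameter --with-decryption \
  --query Parameter.Value --output text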

Does AWS KMS use envelope encryption?

The maximum data size allowed for AWS KMS encryption is 4 KB, so whenever we use encryption in AWS services/resources, is it done using envelope encryption? I.e., the data is encrypted on the resource side with a data key, the data key is encrypted with another key (the CMK) and stored along with the data, and decryption happens in the reverse order. Is my understanding correct?
Probably. It seems to be true at least for S3:
Server-side encryption protects data at rest. Amazon S3 encrypts each object with a unique key. As an additional safeguard, it encrypts the key itself with a master key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).
Generally, the CMK is not used for encrypting the data you are actually looking to encrypt.
Whilst the 4 KB limit is a matter of opinion, data encryption keys provide a more secure approach to encrypting the data.
Because each resource can have its own data encryption key, the risk of having all of your resources decrypted if a single encryption key is compromised is reduced (in fact, if this happens, KMS supports re-encryption to generate a new data key).
What you describe is correct for S3's implementation of KMS. A base64-encoded encrypted key is stored alongside the object it encrypts. To decrypt, S3 first decrypts the data key for the object using the CMK, then uses the decrypted data encryption key to decrypt the object.
Other services will have different implementations, for example DynamoDB does this on a per table basis.
For more information on how each service has implemented KMS take a look at the How AWS Services use AWS KMS page
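To make the pattern concrete, here is a sketch of doing envelope encryption by hand with the CLI and OpenSSL (alias/my-key and all file names are placeholders; a real implementation would also authenticate the ciphertext):
# 1. ask KMS for a data key: one call returns both the plaintext key
#    and a copy of it encrypted under the CMK
aws kms generate-data-key --key-id alias/my-key --key-spec AES_256 > datakey.json
jq -r .Plaintext datakey.json | base64 --decode > datakey.bin        # plaintext data key
jq -r .CiphertextBlob datakey.json | base64 --decode > datakey.enc   # encrypted data key
# 2. encrypt the payload locally with the plaintext data key, then discard it
IV_HEX=$(openssl rand -hex 16)   # store the IV alongside the ciphertext
openssl enc -aes-256-cbc -K "$(xxd -p -c 64 datakey.bin)" -iv "$IV_HEX" \
  -in payload.txt -out payload.enc
rm -f datakey.bin datakey.json   # keep only datakey.enc next to the data
# 3. later: recover the data key from its encrypted copy, then decrypt
aws kms decrypt --ciphertext-blob fileb://datakey.enc \
  --query Plaintext --output text | base64 --decode > datakey.bin
openssl enc -d -aes-256-cbc -K "$(xxd -p -c 64 datakey.bin)" -iv "$IV_HEX" \
  -in payload.enc -out payload.txt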
AWS KMS does not store any of your data; it provides you with two keys:
1. A plain key: with its help you encrypt the data, then delete the key (no need to save it anywhere).
2. An encrypted data key: you need to save this key to decrypt the data (to decrypt the data, you first get the plain key back from AWS using the encrypted data key, then decrypt the data with the plain key).
Note that you need AWS KMS configuration values like:
a) serviceEndPoint b) awsKeyForKMS c) kmsConfig
KMS encryption and decryption in ASP.NET MVC.
Namespaces to add from the NuGet package:
using Amazon.KeyManagementService;
using Amazon.KeyManagementService.Model;
1) Encryption:
AmazonKeyManagementServiceConfig kmsConfig = new AmazonKeyManagementServiceConfig();
kmsConfig.UseHttp = true;
kmsConfig.ServiceURL = serviceEndPoint;
// create the client, specifying a region endpoint or the KMS config
AmazonKeyManagementServiceClient kmsClient = new AmazonKeyManagementServiceClient(awsKeyForKMS, awsSecretKeyForKMS, kmsConfig);
GenerateDataKeyRequest dataKeyReq = new GenerateDataKeyRequest();
dataKeyReq.KeyId = keyARNForKMS;
dataKeyReq.KeySpec = DataKeySpec.AES_256; // the length of the data encryption key; AES_256 generates a 256-bit symmetric key
GenerateDataKeyResponse dataKeyResponse = kmsClient.GenerateDataKey(dataKeyReq);
// read the encrypted data key from memory;
// save it with the encrypted data, because you need it later to recover the plain data key
MemoryStream streamCipherText = dataKeyResponse.CiphertextBlob;
encryptedDataKey = Convert.ToBase64String(streamCipherText.ToArray());
// read the plain data key from memory;
// use it to encrypt your data, then forget it (do not store it)
MemoryStream streamPlainText = dataKeyResponse.Plaintext;
plainDataKey = Convert.ToBase64String(streamPlainText.ToArray());
// your own encryption logic
Encryption encrypt = new Encryption();
encrypt.EncryptTextForKms(plainDataKey, "data to be encrypted");
2) Decryption:
AmazonKeyManagementServiceConfig kmsConfig = new AmazonKeyManagementServiceConfig();
kmsConfig.UseHttp = true;
kmsConfig.ServiceURL = serviceEndPoint;
// create the client, specifying a region endpoint or the KMS config
AmazonKeyManagementServiceClient kmsClient = new AmazonKeyManagementServiceClient(awsKeyForKMS, awsSecretKeyForKMS, kmsConfig);
DecryptRequest decryptRequest = new DecryptRequest();
// use the encrypted data key created above to get the plain data key back
MemoryStream streamEncryptedDataKey = new MemoryStream(Convert.FromBase64String(encryptedDataKey)); // convert to a stream object
decryptRequest.CiphertextBlob = streamEncryptedDataKey;
DecryptResponse decryptResp = kmsClient.Decrypt(decryptRequest);
plainDataKey = Convert.ToBase64String(decryptResp.Plaintext.ToArray());
// your own decryption logic
DecryptTexts("encrypted data", plainDataKey);