How to supply a key on the command line that's not Base 64 encoded - amazon-web-services

Regarding the AWS S3 tool "sync" and a "customer-provided encryption key", it says here,
--sse-c-key (string) The customer-provided encryption key to use to server-side encrypt the object in S3. If you provide this value, --sse-c must be specified as well. The key provided should not be base64 encoded.
How does one supply a key on the command line that is not base64 encoded?
If the key is not base64 encoded, then surely some of the key's bytes would not be expressible as characters?

At first glance, this seems like a HUGE oversight in the aws cli. However, buried deep in the CLI documentation is a blurb on how to provide binary data on the command line.
https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html
(updated link per @Chris's comment)
This did in fact work for me...
aws s3 cp --sse-c AES256 --sse-c-key fileb://key.bin large_file s3://mybucket/
The fileb:// prefix is the answer: it tells the CLI to read the raw, binary contents of the file rather than treating it as text.
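As an aside: SSE-C uses AES-256, so the key must be exactly 256 bits (32 bytes) of raw data. A key.bin like the one above can be generated with, for example:
openssl rand 32 > key.bin
Keep that file safe: S3 does not store the key, and the same key must be supplied again to read the object back.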

Related

AWS S3api put-object: unknown options (checksum-crc32)

So I want to upload a file and have AWS perform a check against a specified CRC32 (let's say the CRC is ABCD1234) after the upload, but I keep getting this error.
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
Unknown options: --checksum-crc32, ABCD1234
The command I use goes as follows (brackets [] for variables)
aws s3api put-object --bucket [BUCKET_NAME] --checksum-crc32 "ABCD1234" \
  --key [NAME_OF_FILE] --body [DESTINATION_PATH] --profile [PROFILE_NAME]
Uploads without the --checksum-crc32 work just fine.
Version: aws-cli/2.4.4
Any guesses why I get this error?
Thanks in advance!
The documentation says that the CRC needs to be Base-64 encoded, not hexadecimal:
--checksum-crc32 (string)
This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
So your ABCD1234 would need to be either q80SNA== or NBLNqw==, depending on whether they expect the 32 bits to be rendered in big-endian or little-endian order, respectively. I didn't see anything in the documentation that says which it is.
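Both renderings are easy to reproduce locally; a quick sketch, assuming a shell whose printf understands \x escapes (bash's builtin does):
printf '\xAB\xCD\x12\x34' | base64    # big-endian rendering: q80SNA==
printf '\x34\x12\xCD\xAB' | base64    # little-endian rendering: NBLNqw==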
If the CRC32 you supply doesn't match S3's own calculation, the upload is rejected, so make sure you're encoding it properly.
You don't need to specify the checksum on the CLI; you can have the client calculate it by removing --checksum-crc32 and replacing it with --checksum-algorithm "crc32".
If your goal is data integrity, consider a cryptographically secure algorithm like SHA256, which the CLI can also calculate automatically.
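For example, either of these variants should let the CLI do the work (same [BRACKET] placeholder convention as the question; note that the s3api docs list the algorithm values in uppercase):
aws s3api put-object --bucket [BUCKET_NAME] --key [NAME_OF_FILE] --body [DESTINATION_PATH] --checksum-algorithm CRC32
aws s3api put-object --bucket [BUCKET_NAME] --key [NAME_OF_FILE] --body [DESTINATION_PATH] --checksum-algorithm SHA256
The CLI then computes the checksum itself, and S3 verifies it on receipt.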

Managing key rotations with GCP_KMS with BYOK solution

We have RSA key pairs generated on-prem and plan to sync them to GCP KMS. There is a yearly key rotation policy, which would be carried out on-prem, and the new key_versions would be synced to KMS. My concern is with the KMS API.
Problem: The API always asks for the 'key_version' as an argument to encrypt/decrypt a file.
Desired behaviour: During decryption, is it not possible that the KMS sees the certificate thumbprint and returns the appropriate key version to decrypt a given encrypted file? e.g. a DEK wrapped with the RSA_public when supplied to KMS gets decrypted by the RSA_Private(or KEK) of the correct version.
If yes, is there any documentation that elaborates on this use case?
According to the documentation, you can achieve that with symmetric decryption (no key version is specified; KMS works out the version from the ciphertext itself), but you can't with asymmetricDecrypt (the key version is required in the URL path of the API).
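The difference shows up directly in the REST surface; roughly (project, ring, and key names below are placeholders):
# Symmetric: no version in the path; KMS selects the version from the ciphertext
POST https://cloudkms.googleapis.com/v1/projects/my-proj/locations/global/keyRings/my-ring/cryptoKeys/my-key:decrypt
# Asymmetric: the cryptoKeyVersion must be named explicitly
POST https://cloudkms.googleapis.com/v1/projects/my-proj/locations/global/keyRings/my-ring/cryptoKeys/my-key/cryptoKeyVersions/1:asymmetricDecrypt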

GCP - DLP - Decryption failed: the ciphertext is invalid error when using KMS wrapped key

While trying out a POC with GCP DLP, I'm facing the below issue:
log:
Received the following error message from Cloud KMS when unwrapping KmsWrappedCryptoKey "projects/<<PROJECT_ID>>/locations/global/keyRings/<<KMS_KEY_RING>>/cryptoKeys/<<KMS_KEY_NAME>>": Decryption failed: the ciphertext is invalid.
I have just created the key and key ring using the generate-key option in KMS, and a basic DLP template to pseudonymize the data with a cryptographic deterministic token. The wrapped key I supplied is a simple base64-format key. When testing this template in the console with data, I face this issue. The same issue is replicated in the application logs when trying to encrypt the data.
P.S.: We have tried generating a key manually using OpenSSL and importing it into KMS. We are still facing this issue.
Figured out the issue in this case.
The issue was with the way we created the wrapped key that we supplied in the DLP template. Below are the steps to generate the wrapped key:
Choose the key to be wrapped (it can be anything: a string, random bytes, etc.)
Encrypt (wrap) the key from the step above using the KMS key that you are going to use in the DLP template.
Convert the encrypted key to base64 format and use that in the DLP template.
Below are the commands for above steps in the same order:
openssl rand 16 > secret.txt
This generates a random key of 16 bytes. The size has to be one of 16, 24, or 32 bytes (mandatory).
gcloud kms encrypt --location global --keyring <key-ring-name> \
  --key <key-name> --plaintext-file secret.txt \
  --ciphertext-file mysecret.txt.encrypted
This encrypts the random key.
base64 mysecret.txt.encrypted
Use the base64 output in the DLP template.
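For orientation, the base64 value ends up in the kmsWrapped block of the template's crypto key. A rough sketch of the relevant fragment (field names per the DLP API; the <<...>> placeholders match the error above):
{
  "cryptoDeterministicConfig": {
    "cryptoKey": {
      "kmsWrapped": {
        "wrappedKey": "<base64 output from the step above>",
        "cryptoKeyName": "projects/<<PROJECT_ID>>/locations/global/keyRings/<<KMS_KEY_RING>>/cryptoKeys/<<KMS_KEY_NAME>>"
      }
    }
  }
}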
This answer helped me figure out the issue : https://stackoverflow.com/a/60513800/6908062

Amazon S3 SSE-C encryption of file already on S3

I have an application that has been running for a long time, uploading files (images) to S3 storage.
Now I've been asked to update this application to upload files using SSE-C encryption (Server-Side Encryption with Customer-provided keys). So I did.
I'm also able to upload SSE-C-encrypted files using the aws cli.
What I need now, and here is my question, is a way to apply SSE-C encryption to earlier files that are already on S3 without SSE-C encryption.
Could someone explain whether and how this can be accomplished, or point me to some doc or support page where I can find a solution?
One (maybe inefficient) way I found is doing the following for each file:
copy filename to filename.encrypted, applying SSE-C encryption
move filename.encrypted back to filename
Is this the only way to do it, or is there a better one?
NOTES:
Since I have many, many files, I obviously excluded the option of downloading each file and uploading it again with SSE-C encryption, because it would be too slow and too expensive.
A solution that applies SSE-C without transferring data from and back to S3 is the one I'm looking for.
Thank you very much for any feedback on this.
You can apply encryption to already-existing objects by simply copying the object on top of itself:
aws s3 cp s3://bucket/foo.txt s3://bucket/foo.txt --sse-c --sse-c-key fileb://key.bin
This works as long as something (e.g. the encryption) is changing.
I got the --sse-c syntax from: How to supply a key on the command line that's not Base 64 encoded
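To sweep an entire bucket with that trick, a rough batch sketch along the same lines (bucket name and key file are placeholders; keys containing newlines would need extra care):
aws s3api list-objects-v2 --bucket mybucket --query 'Contents[].Key' --output text | tr '\t' '\n' | \
while read -r key; do
  aws s3 cp "s3://mybucket/$key" "s3://mybucket/$key" --sse-c AES256 --sse-c-key fileb://key.bin
done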

Understanding the contents of the "continuation-token" request parameter of the "GET Bucket (List Objects) Version 2" command in S3 API

Does anyone know how to decode the contents provided by Amazon S3 in this field of the response? It looks like a hashed string, but I need to understand what it contains. Does it use some commonly used hashing?
It looks like a hash, but the documentation says it's obfuscated; maybe it's salted.
NextContinuationToken is sent when isTruncated is true, which means there are more keys in the bucket that can be listed. The next list request to Amazon S3 can be continued with this NextContinuationToken. NextContinuationToken is obfuscated and is not a real key.
Example: 17z4MXisD8tDT/0+uf1UndaqeI3+K7vG8bso1NFBtbPq2gKRaS2Ax6ioTgsamg0QOZt3V56dV4r0=
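In practice you never decode it: treat it as an opaque cursor and pass it back unchanged on the next request, e.g. (bucket name is a placeholder):
aws s3api list-objects-v2 --bucket mybucket --max-keys 1000
# the response includes IsTruncated=true and a NextContinuationToken; resume with:
aws s3api list-objects-v2 --bucket mybucket --max-keys 1000 --continuation-token "<NextContinuationToken from the previous response>"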