How is "Encryption key SHA256" computed on Google Cloud Storage? - google-cloud-platform

I generated an AES key and used gsutil to upload a client-side encrypted file to Google Cloud Storage. The file is shown as client-encrypted, and its metadata contains an "Encryption key SHA256" value that appears to be base64.
When I calculate the sha256sum of my key string and base64-encode the result, it does not match. How is this value computed?

The hash is computed over the base64-decoded key bytes, not over the base64 string itself.

Try using this command:
export encryption_key=yxCPz7MD1bLjcBJXiXAlu6obBAAn2leIebkTtnxtB+U=
echo "${encryption_key}" | base64 -d | openssl dgst -binary -sha256 | base64
Reference:
https://unix.stackexchange.com/questions/3675/how-can-i-get-a-base64-encoded-shax-on-the-cli?newreg=54d06c85faaf4d739723334c6f9e13d3
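To make the mismatch concrete: the metadata value is the SHA-256 of the decoded raw key bytes, not of the base64 text. A minimal comparison, reusing the key from the command above:

```shell
encryption_key="yxCPz7MD1bLjcBJXiXAlu6obBAAn2leIebkTtnxtB+U="
# Hashing the base64 text itself -- this is the value that does NOT match
printf '%s' "${encryption_key}" | openssl dgst -binary -sha256 | base64
# Hashing the decoded 32 raw key bytes -- this matches the
# "Encryption key SHA256" shown in the object metadata
printf '%s' "${encryption_key}" | base64 -d | openssl dgst -binary -sha256 | base64
```

(`printf '%s'` is used here instead of `echo` so no trailing newline sneaks into the data being hashed; `base64 -d` tolerates the newline in the decode step, which is why the `echo` version above still works.)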

Related

Ceph/Rados AWS S3 API Bucket Policy via CURL

I am currently struggling with REST calls to an AWS S3 API hosted by a Rados/Ceph gateway.
For reasons I won't go into, I can't use an SDK to talk to it, which would solve all of my woes. Instead, I'm recreating some of the simpler jobs I need via curl, and for the most part this works: I can make buckets, delete them, add objects, and create roles. My newest problem is bucket policies, both GET and PUT: I receive a 403 every time and cannot figure out why.
On another box with an SDK (boto3), I used the AWS s3api calls to do the same thing, and they work perfectly fine with the user's access and secret key, so I do not think it's an account issue.
Using the logs from the SDK jobs, I have attempted to recreate everything that is being sent: headers, payload, etc.
My only remaining thought is that, since it's a 403, the problem might be the Signature Version 4 strategy, but this strategy works for every other job I need to do.
Code:
#!/bin/bash
set -x
ACCESS_KEY="accesskey"
SECRET_KEY="secret"
SERVICE="s3"
REGION="default"
ENDPOINT="s3-test.example.com"
BUCKET="bucketname"
SUBRESOURCE="?policy"   # note: not named PATH, since overwriting PATH would break every later command
TIMEDATE="$(date -u '+%Y%m%d')"
TIMEDATEISO="${TIMEDATE}T$(date -u '+%H%M%S')Z"
# Create sha256 hash in hex
function hash_sha256 {
printf "${1}" | openssl dgst -sha256 | sed 's/^.* //'
}
# Create sha256 hmac in hex; ${1} is the -macopt key spec, ${2} the data
function hmac_sha256 {
printf "${2}" | openssl dgst -sha256 -mac HMAC -macopt "${1}" | sed 's/^.* //'
}
# SHA-256 of the (empty) request body
PAYLOAD="$(printf "" | openssl dgst -sha256 | sed 's/^.* //')"
CANONICAL_URI="/${BUCKET}${SUBRESOURCE}"
CANONICAL_HEADERS="host:${ENDPOINT}
x-amz-content-sha256:${PAYLOAD}
x-amz-date:${TIMEDATEISO}"
SIGNED_HEADERS="host;x-amz-content-sha256;x-amz-date"
CANONICAL_REQUEST="GET
${CANONICAL_URI}\n
${CANONICAL_HEADERS}\n
${SIGNED_HEADERS}\n
${PAYLOAD}"
# Create signature
function create_signature {
stringToSign="AWS4-HMAC-SHA256\n${TIMEDATEISO}\n${TIMEDATE}/${REGION}/${SERVICE}/aws4_request\n$(hash_sha256 "${CANONICAL_REQUEST}")"
dateKey=$(hmac_sha256 key:"AWS4${SECRET_KEY}" "${TIMEDATE}")
regionKey=$(hmac_sha256 hexkey:"${dateKey}" "${REGION}")
serviceKey=$(hmac_sha256 hexkey:"${regionKey}" "${SERVICE}")
signingKey=$(hmac_sha256 hexkey:"${serviceKey}" "aws4_request")
printf "${stringToSign}" | openssl dgst -sha256 -mac HMAC -macopt hexkey:"${signingKey}" | sed 's/^.* //'
}
SIGNATURE="$(create_signature)"
AUTH_HEADER="\
AWS4-HMAC-SHA256 Credential=${ACCESS_KEY}/${TIMEDATE}/\
${REGION}/${SERVICE}/aws4_request, \
SignedHeaders=${SIGNED_HEADERS}, Signature=${SIGNATURE}"
curl -vvv "https://${ENDPOINT}${CANONICAL_URI}" \
-H "Accept:" \
-H "Authorization: ${AUTH_HEADER}" \
-H "x-amz-content-sha256: ${PAYLOAD}" \
-H "x-amz-date: ${TIMEDATEISO}"
Any help or pointers would be massively appreciated. I had to transpose this by hand, so if there is an obvious typo I will fix it and rerun.
As I say, I would love to use an SDK, but from the appliance that will be making these requests it's just not possible.
I managed to solve my own issue.
I noticed in the Ceph logs (not sure how I missed it the first time round) that the signature from my client didn't match the signature the Ceph radosgw computed for the same request.
Going back over the canonical request: for some reason, if I take out all the "\n" escape sequences, the signature calculates correctly. But all of my other jobs, like updating roles and adding buckets, fail without them, as they need the line breaks. Not sure why; some Ceph weirdness?
I did packet captures and stripped the SSL to compare what a working SDK request and my curl were sending, and they were identical.
Oh well, working :)
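The line-break symptom described above can be reproduced locally. This is only a sketch of the shell mechanics, not of radosgw itself: inside double quotes, `\n` is stored as two literal characters, while `printf "$s"` (the form used in the script's hash functions) expands it into a real newline, so a canonical request that mixes real newlines with `\n` escapes hashes to a different digest than one built from real newlines alone:

```shell
# A string containing one real newline and one "\n" escape sequence
s="line1
line2\nline3"
# printf '%s' keeps the backslash-n as two literal characters...
printf '%s' "$s" | openssl dgst -sha256
# ...while printf "$s" expands the escape, so the two digests differ
printf "$s" | openssl dgst -sha256
```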

(gcloud.kms.encrypt) Failed to read plaintext file

I want to encrypt a key file with the gcloud command line tool.
The command I am running is:
gcloud kms encrypt --project=pname --location=global --keyring=keyring \
--key=key-credential \
--plaintext-file=/Users/macuser/Desktop/test/keys/testkey.json.decrypted \
--ciphertext-file=testkey.json.encrypted
but I keep getting the error
ERROR: (gcloud.kms.encrypt) Failed to read plaintext file
[/Users/macuser/Desktop/test/keys/testkey.json.decrypted]:
Unable to read file [/Users/macuser/Desktop/test/keys/testkey.json.decrypted]:
[Errno 2] No such file or directory:
'/Users/macuser/Desktop/test/keys/testkey.json.decrypted'
The file
/Users/macuser/Desktop/test/keys/testkey.json.decrypted
exists. I tried it with absolute and relative path, and with and without quotes, but I keep getting the same error.
Why is gcloud not seeing the file?
Ensure the user that is calling the encrypt and decrypt methods has the cloudkms.cryptoKeyVersions.useToEncrypt and cloudkms.cryptoKeyVersions.useToDecrypt permissions on the key used to encrypt or decrypt.
One way to permit a user to encrypt or decrypt is to add the user to the roles/cloudkms.cryptoKeyEncrypter, roles/cloudkms.cryptoKeyDecrypter, or roles/cloudkms.cryptoKeyEncrypterDecrypter IAM roles for that key.
To use Cloud KMS on the command line, first install or upgrade to the latest version of the Cloud SDK.
gcloud kms encrypt \
--key key \
--keyring key-ring \
--location location \
--plaintext-file file-with-data-to-encrypt \
--ciphertext-file file-to-store-encrypted-data
Replace key with the name of the key to use for encryption.
Replace key-ring with the name of the key ring where the key is located.
Replace location with the Cloud KMS location of the key ring.
Replace file-with-data-to-encrypt and file-to-store-encrypted-data with the local file paths for reading the plaintext data and saving the encrypted output.
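Since the error itself is "[Errno 2] No such file or directory", it is also worth ruling out a path problem before touching IAM; a trailing space or other invisible character in the file name is a common cause. A quick check (the path is the one from the question; substitute your own):

```shell
f="/Users/macuser/Desktop/test/keys/testkey.json.decrypted"
ls -l "$f" || echo "no file by exactly that name"
printf '%s\n' "$f" | od -c | tail -3   # reveals any hidden characters in the path
```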

How to sign an ova file?

I am trying to sign an OVA file by following this link. I also generated the manifest file, but I don't see any signing information in the resulting file. Here are the commands I am using:
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:1024 -keyout myself.pem -out myself.pem
openssl sha1 *.ova > myself.mf
ovftool --privateKey=myself.pem sample.ova sample-signed.ova
When I run ovftool sample-signed.ova, it shows no manifest information. I tried unpacking the OVA and doing exactly the same with the OVF file, but that didn't help either.
ovftool --version
VMware ovftool 4.1.0 (build-2459827)
An OVA is a tar archive containing:
an OVF (XML) descriptor file,
related resource files (e.g. disk VMDKs),
an MF manifest file containing hashes of the files above,
and possibly more (e.g. a CERT signature).
The signing process adds a CERT file containing the generated signature of the MF file plus the certificate. I'm not sure whether ovftool can operate on the OVF or MF file itself; however, when run on the whole OVA archive, it also creates the MF file, if missing, in the new signed OVA (that's for ovftool-4.5.0-20459872).
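For reference, the files ovftool produces can be sketched by hand. This is a hypothetical miniature package (file names and contents are made up) showing what the MF manifest and the signature over it look like; a real CERT file additionally embeds the X.509 certificate:

```shell
# Stand-ins for the files inside an OVA package
printf 'dummy ovf descriptor' > sample.ovf
printf 'dummy disk contents' > sample-disk1.vmdk
# The .mf manifest: one "SHA256(name)= digest" line per packaged file
for f in sample.ovf sample-disk1.vmdk; do
  printf 'SHA256(%s)= %s\n' "$f" "$(openssl dgst -sha256 -r "$f" | cut -d' ' -f1)"
done > sample.mf
cat sample.mf
# A self-signed key/cert pair, as in the question's first command
openssl req -x509 -nodes -sha256 -days 365 -newkey rsa:2048 \
  -subj "/CN=myself" -keyout myself.pem -out myself.crt 2>/dev/null
# The signature over the manifest (the core of what goes into the .cert)
openssl dgst -sha256 -sign myself.pem -out sample.mf.sig sample.mf
```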

Storing binary data in Google Secret Manager

I'm using Google Secret Manager for the first time to store some binary data. When I access the secret, it seems to have a different encoding or format.
The actual data is a certificate bundle, but I've been able to reproduce the issue using smaller binary data. Steps to reproduce:
Create a file with binary data:
echo -e -n '\xed\xfe' > secret.txt
Create the secret and version:
gcloud secrets create "my-secret" \
--data-file ./secret.txt \
--replication-policy "automatic"
Access the secret and save the result to a file:
gcloud secrets versions access latest --secret "my-secret" > result.txt
Compare the two files:
od -t x1 secret.txt # ed fe
od -t x1 result.txt # 3f 3f 0a
Why is the result different? Do I have to do something extra to get Google Secret Manager to work with binary data?
Secret Manager stores data exactly as given. Unfortunately, there was a bug in the gcloud CLI that appended an additional newline character to the end of the response.
This bug was fixed in gcloud v288.0.0, so please make sure you are using v288.0.0 or higher.
If you're concerned about local encoding issues, you should acquire the raw JSON response instead. This response will include the base64-encoded secret payload, which is much safer for transport:
gcloud secrets versions access latest --secret "my-secret" --format "json"
You can use a tool like jq to parse the JSON on the command line. Note, the secret payload data is base64-encoded, so you will need to decode the value prior to using it.
gcloud secrets versions access latest --secret "my-secret" --format "json" | \
jq -r .payload.data | \
base64 --decode > results_binary.txt
Verify:
od -t x1 results_binary.txt # ed fe
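The decode step can also be sanity-checked without calling gcloud at all. This sketch fakes the JSON response shape (the field name `payload.data` matches the real API; the value is the two bytes from the question, base64-encoded):

```shell
json='{"payload":{"data":"7f4="}}'   # 0xed 0xfe base64-encodes to "7f4="
printf '%s' "$json" | jq -r .payload.data | base64 --decode | od -t x1
```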

AWS EC2 pem key in txt

I am trying to launch an AWS EC2 server. I got a key pair, but the key file is named privatekey.pem.txt.
If I open it with a text editor it looks like a normal key, but how can I generate a .pem file from it?
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAh89 ...
Have you tried simply renaming the file with a .pem extension, i.e. getting rid of the .txt? My .pem file is also a text file (though not named as such), and it works just fine.
You can use either an AWS-generated PEM key or a custom PEM key you have on your computer.
When you generate your key from the AWS console or CLI, you get a PEM file, which is your private key, and you can use it on your ssh command line, for example. If this key has been renamed by you or your OS (adding a .txt), you can simply rename it back to <key>.pem.
When you generate the key yourself (in RSA format), you have to convert your public key to PEM format before uploading it to AWS. You can do that with the following command:
ssh-keygen -f rsa.pub -e -m pem
Of course, wherever your key was generated, you have to restrict its permissions:
chmod 400 <key>.pem
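Putting both answers together as a sketch (the file name is the one from the question; the key content below is a placeholder, not a real key):

```shell
# Placeholder standing in for the downloaded private key file
printf -- '-----BEGIN RSA PRIVATE KEY-----\nplaceholder\n-----END RSA PRIVATE KEY-----\n' \
  > privatekey.pem.txt
mv privatekey.pem.txt privatekey.pem   # the .txt suffix was only cosmetic
chmod 400 privatekey.pem               # ssh refuses group/world-readable keys
ls -l privatekey.pem
# ssh -i privatekey.pem ec2-user@<public-ip>   # then connect as usual
```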