{"address":"5CVVapxEBkvGnK2t1xMd2rbg2P1EekRnNNzPZPWFxwpzcJbv","encoded":"0xbf3c994369b9d481cb42e25e7a4e39961ef40c0c60d1bb0d6bcf589bbff62d69961c5c9baeca0c2c884c2fff6fea48665e24652d7c8078e505338ded954e41e2453f37937d4afbf84573178f511ae5b84e903ff989f852b70c967ddddbd223a7fd1985a25690cdd632521e45049ec882ca4ca01caf5691d4fd0fdc6375e1b3d74146e5ed9d40de4b32f35757d8685642ed6db0cabfd49b1bc09b369624","encoding":{"content":["pkcs8","ed25519"],"type":"xsalsa20-poly1305","version":"2"},"meta":{"name":"aravind","tags":["a"],"whenCreated":1637583415090}}
This is one of the account files downloaded from Substrate. What does the "encoded" key-value pair above denote?
Are there any detailed docs on how accounts are created on Substrate?
First, this is not Substrate itself; it is the polkadot-js extension and apps.
About the Substrate aspect, see: https://docs.substrate.io/v3/advanced/ss58.
About polkadot-js-extension:
The "encoded" field is your private key, encrypted with the password you provided. The rest of the fields are fairly self-explanatory.
https://wiki.polkadot.network/docs/learn-account-generation#storing-your-accounts-json-file
https://wiki.polkadot.network/docs/learn-account-restore
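To illustrate what that means in practice, here is a minimal sketch of unlocking such a JSON backup with @polkadot/keyring (my own example: account.json and 'PASSWORD' are placeholders for your exported file and the password you chose):

import { Keyring } from '@polkadot/keyring';
import { cryptoWaitReady } from '@polkadot/util-crypto';
import accountJson from './account.json';

async function main() {
  await cryptoWaitReady(); // initialize the WASM crypto before using the keyring
  const keyring = new Keyring({ type: 'ed25519' }); // matches "content": ["pkcs8", "ed25519"]
  const pair = keyring.addFromJson(accountJson as any); // typing shortcut for this sketch
  pair.decodePkcs8('PASSWORD'); // decrypts the "encoded" blob with your password
  console.log('unlocked address:', pair.address);
}

main().catch(console.error);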
Related
We have RSA key pairs generated on-prem and plan to sync them to GCP KMS. There is a yearly key rotation policy, carried out on-prem, after which the new key versions would be synced to KMS. My concern is with the KMS API.
Problem: The API always asks for the 'key_version' as an argument to encrypt/decrypt a file.
Desired behaviour: during decryption, is it not possible for KMS to inspect the certificate thumbprint and select the appropriate key version to decrypt a given encrypted file? E.g. a DEK wrapped with the RSA public key, when supplied to KMS, gets decrypted by the RSA private key (the KEK) of the correct version.
If yes, is there any documentation that elaborates on this use case?
According to the documentation, you can achieve that with symmetric encryption/decryption (no key version specified), but you can't with asymmetricDecrypt (the key version is required in the URL path of the API).
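For reference, a short sketch with the @google-cloud/kms Node.js client shows the asymmetry (project, location, key ring, and key names are placeholders):

import { KeyManagementServiceClient } from '@google-cloud/kms';

const client = new KeyManagementServiceClient();

// Symmetric decrypt: the resource name stops at the key; KMS picks the
// correct key version from metadata embedded in the ciphertext.
async function symmetricDecrypt(ciphertext: Buffer) {
  const name = client.cryptoKeyPath('my-project', 'global', 'my-ring', 'my-key');
  const [response] = await client.decrypt({ name, ciphertext });
  return response.plaintext;
}

// Asymmetric decrypt: the resource name must include an explicit version ('1' here).
async function rsaDecrypt(ciphertext: Buffer) {
  const name = client.cryptoKeyVersionPath('my-project', 'global', 'my-ring', 'my-key', '1');
  const [response] = await client.asymmetricDecrypt({ name, ciphertext });
  return response.plaintext;
}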
Scenario
I have a full-text search requirement where the search must look inside the documents. I am uploading documents to an S3 bucket and encrypting them using envelope encryption.
Can we do full-text search on encrypted documents in the S3 bucket? If yes, what are the REST APIs (Node.js APIs) for this?
Example: bucket1 contains files with encrypted content:
bucket1/abc.pdf
bucket1/def.doc
bucket1/ghi.txt
and I want to search for text like "I am from planet earth" in the above files.
As the result, I want the name(s) of the file(s) containing the above text.
Solution
I am reading the following article:
aws article here
encryption of data at rest
Problem
Will it work if the S3 bucket data is encrypted?
What will be the best solution for this scenario?
Elasticsearch does not search inside your files in S3; you need to index the content of the documents into Elasticsearch to be able to perform searches. It also does not support searching encrypted data; the indexed data needs to be stored in clear text.
What you can do is configure SSL/TLS and authentication on Elasticsearch, so that requests are only possible with the correct certificate plus a username and password.
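If decrypting before indexing is acceptable, a rough Node.js sketch with the @elastic/elasticsearch client could look like this (the index name, field names, and credentials are all made up):

import { Client } from '@elastic/elasticsearch';

const es = new Client({
  node: 'https://localhost:9200',
  auth: { username: 'elastic', password: 'changeme' }, // placeholder credentials
});

// Index the decrypted plaintext of one S3 object, keyed by its file name.
async function indexDocument(fileName: string, plaintext: string) {
  await es.index({ index: 'documents', document: { fileName, content: plaintext } });
}

// Search for a phrase and return the matching file names.
async function searchDocuments(phrase: string): Promise<string[]> {
  const result = await es.search<{ fileName: string }>({
    index: 'documents',
    query: { match_phrase: { content: phrase } },
    _source: ['fileName'],
  });
  return result.hits.hits.map((hit) => hit._source!.fileName);
}

// e.g. searchDocuments('I am from planet earth') -> ['abc.pdf', ...]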
I'm using the AWS Encryption SDK to encrypt data before storing it in a database. I'm also using AWS KMS. As part of the encryption process, the SDK stores the Key Provider ID of the data key used for encryption in the generated ciphertext header.
As described in the documentation here http://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/message-format.html#header-structure
The encryption operations in the AWS Encryption SDK return a single data structure or message that contains the encrypted data (ciphertext) and all encrypted data keys. To understand this data structure, or to build libraries that read and write it, you need to understand the message format.
The message format consists of at least two parts: a header and a body. In some cases, the message format consists of a third part, a footer.
The Key Provider ID value contains the Amazon Resource Name (ARN) of the AWS KMS customer master key (CMK).
Here is where the issue comes in. Right now I have two different KMS regions available for encryption. Each Key Provider ID has the exact same Encrypted Data Key value. So either key could be used to decrypt the data. However, the issue is with the ciphertext headers. Let's say I have KMS1 and KMS2. If I encrypt the data with the key provided by KMS1, then the Key Provider ID will be stored in the ciphertext header. If I attempt to decrypt the data with KMS2, even though the Encrypted Data Key is the same, the decryption will fail because the header does not contain the Key Provider for KMS2. It has the Key Provider ID for KMS1. It fails with this error:
com.amazonaws.encryptionsdk.exception.BadCiphertextException: Header integrity check failed.
at com.amazonaws.encryptionsdk.internal.DecryptionHandler.verifyHeaderIntegrity(DecryptionHandler.java:312) ~[application.jar:na]
at com.amazonaws.encryptionsdk.internal.DecryptionHandler.readHeaderFields(DecryptionHandler.java:389) ~[application.jar:na]
...
Caused by: javax.crypto.AEADBadTagException: Tag mismatch!
at com.amazonaws.encryptionsdk.internal.DecryptionHandler.verifyHeaderIntegrity(DecryptionHandler.java:310) ~[application.jar:na]
... 16 common frames omitted
It fails to verify the header integrity. This is not good, because I was planning to have multiple KMS regions in case one region's KMS fails. We duplicate our data across all our regions, and we thought we could use the KMS of any region to decrypt, as long as the encrypted data keys match. However, it looks like I'm locked into using only the KMS that originally encrypted the data? How on earth can we scale this to multiple regions if we can only rely on a single KMS?
I could include all the region master keys in the call to encrypt the data. That way, the headers would always match, although it would not reflect which KMS it's actually using. However, that's also not scalable, since we could add/remove regions in the future, and that would cause issues with all the data that's already encrypted.
Am I missing something? I've thought about this, and I want to solve this problem without crippling any integrity checks provided by the SDK/Encryption.
Update:
Based on a comment from @jarmod
Using an alias doesn't work either, because an alias can only be associated with a key in the same region, and the header stores the resolved key ARN it points to anyway.
I'm reading this document and it says
Additionally, envelope encryption can help to design your application for disaster recovery. You can move your encrypted data as-is between Regions and only have to reencrypt the data keys with the Region-specific CMKs
However, that's not accurate at all: the Encryption SDK will fail to decrypt in another region, because the Key Provider ID of the re-encrypted data keys will be totally different!
Apologies, since I'm not familiar with Java programming, but I believe there is confusion about how you are using the KMS CMKs to encrypt (or decrypt) the data with keys from more than one region for DR.
When you use multiple master keys to encrypt plaintext, any one of those master keys can be used to decrypt it. Note that only one master key (say MKey1) generates the plaintext data key that is used to encrypt the data; this plaintext data key is then encrypted by the other master key (MKey2) as well.
As a result, you will have the encrypted data + the encrypted data key (under MKey1) + the encrypted data key (under MKey2).
If for some reason MKey1 is unavailable and you want to decrypt the ciphertext, the SDK can decrypt the encrypted data key using MKey2, and that decrypted data key can then decrypt the ciphertext.
So, yes, you have to specify multiple KMS CMK ARNs in your program if you want to use multiple KMS regions. The document you shared has an example as well, which I'm sure you are aware of.
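To make that concrete, here is a minimal sketch using the AWS Encryption SDK for JavaScript (@aws-crypto/client-node), with placeholder ARNs standing in for MKey1 and MKey2:

import { buildClient, CommitmentPolicy, KmsKeyringNode } from '@aws-crypto/client-node';

const { encrypt, decrypt } = buildClient(CommitmentPolicy.REQUIRE_ENCRYPT_REQUIRE_DECRYPT);

// MKey1 generates the data key; MKey2 additionally wraps that same data key.
const generatorKeyId = 'arn:aws:kms:us-east-1:111122223333:key/mkey1-id';
const keyIds = ['arn:aws:kms:us-west-2:111122223333:key/mkey2-id'];

async function demo() {
  const { result } = await encrypt(new KmsKeyringNode({ generatorKeyId, keyIds }), 'secret payload');

  // Decrypt using only the us-west-2 key, as if us-east-1 were unavailable.
  const { plaintext } = await decrypt(new KmsKeyringNode({ keyIds }), result);
  console.log(plaintext.toString());
}

demo().catch(console.error);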
Regarding the AWS S3 tool "sync" and a "customer-provided encryption key", it says here,
--sse-c-key (string) The customer-provided encryption key to use to server-side encrypt the object in S3. If you provide this value, --sse-c must be specified as well. The key provided should not be base64 encoded.
How does one supply a key on the command line that is not base64 encoded?
If the key is not base64 encoded, then surely some of the key's bytes would not be expressible as characters?
At first glance, this seems like a HUGE oversight in the aws cli. However, buried deep in the CLI documentation is a blurb on how to provide binary data on the command line.
https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html
(updated link per @Chris's comment)
This did in fact work for me...
aws s3 cp --sse-c AES256 --sse-c-key fileb://key.bin large_file s3://mybucket/
The fileb:// part is the answer
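For completeness, the raw 32-byte key file itself can be generated with a few lines of Node.js (key.bin is an arbitrary name matching the command above; any source of 32 random bytes works):

import { randomBytes } from 'crypto';
import { writeFileSync } from 'fs';

// 32 random bytes = a 256-bit AES key, written as raw binary (not base64).
writeFileSync('key.bin', randomBytes(32));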
I'm trying to import an existing keypair from my computer to use in EC2. But once I click "Yes, Import", the fingerprint Amazon shows doesn't match the fingerprint shown by ssh-keygen -lf for the same key. I've verified that they're the same key, tried reimporting the key, etc. The common practice seems to be to use the "Create Key Pair" feature instead, but I'd prefer to use my usual SSH keypair. I'm also unable to log in via SSH to an instance that's set to use this keypair (I get Permission denied (publickey)).
Has anyone encountered such issues with AWS? Any insights into what the issue might be?
There seems to be an answer in the AWS forums for the fingerprint difference. I'm pasting the content here for posterity:
Hello,
I discussed with my colleagues and looks like it is a limitation from our end to provide keypair in different format. You'll notice the different lengths of the Amazon-generated Key Pair and the Import Key Pair. In the case of an Amazon-generated Key Pair, the Fingerprint is for the Private Key, while if you use Import Key Pair the fingerprint is for your public key. Amazon does not retain a copy of the generated Private Key, but the EC2 command line tools do provide a way to reproduce the SSH2 MD5 fingerprint:
ec2-fingerprint-key ./testpair1-private.pem
61:26:cc:7d:2a:2c:a4:e9:fb:86:ca:ef:57:d6:68:f8:24:bc:59:cd
This should match what you see in the console for the region in which you created the key, such as US-West-1 (North California). Unfortunately the ec2-fingerprint-key command-line tool does not fingerprint public keys. If you import the public key in another region such as US-East-1, the web AWS Console will only display the fingerprint of the public key.
Secondly, the AWS Console should be more clear on exactly what type of fingerprint it displays, which is the "MD5 public key fingerprint as specified in section 4 of RFC4716" (also known as SSH2 format) as mentioned here:
http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-ImportKeyPair.html
We have already put in a feature request for the web-based AWS Console to support the more common OpenSSH format. Unfortunately I was not able to find any user-friendly tools to generate the SSH2/RFC4716 format fingerprint, though I did find that you can import the same public key in your original region (with a name such as "Test2") and match the shown fingerprint between regions.
(emphases mine)
As he mentions, I too wasn't able to locate any tool to generate the SSH2/RFC4716-format fingerprint. This at least solves the mystery of the mismatched fingerprints (at least if we assume ssh-keygen -lf gives output in the "more common OpenSSH format"; please correct me if this assumption is wrong). I'm still getting Permission denied (publickey) when I try to SSH in, but I'll assume it's not an actual key mismatch now and explore other avenues.
Here's an alternative way to verify the fingerprint:
openssl pkcs8 -in my-aws-key.pem -nocrypt -topk8 -outform DER | openssl sha1 -c
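Since EC2 displays the MD5 of the public key (in DER/SPKI form) for imported key pairs, a short Node.js sketch can reproduce that fingerprint as well; this mirrors the openssl rsa -pubout -outform DER | openssl md5 -c recipe from AWS's docs, and my-aws-key.pem is a placeholder:

import { createHash, createPublicKey } from 'crypto';
import { readFileSync } from 'fs';

// Derive the public key from the private-key PEM, export it as DER (SPKI),
// then take the MD5 -- the fingerprint EC2 shows for imported key pairs.
const der = createPublicKey(readFileSync('my-aws-key.pem')).export({ type: 'spki', format: 'der' });
const md5 = createHash('md5').update(der).digest('hex');
console.log(md5.match(/../g)!.join(':')); // e.g. aa:bb:cc:...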