AWS Secrets Manager read and write concurrency

If I am writing a new secret value to a secret and a read is performed at the same time, does the read call wait for the write call to complete? Or will the read retrieve some invalid or intermediate value of the keys stored inside the secret?

By default, the GetSecretValue API returns the version of the secret that has the AWSCURRENT staging label. It also lets you fetch older versions of a secret by specifying a VersionId. Versions are immutable: calling PutSecretValue creates a new version.
You won't get partial versions here: the AWSCURRENT label is only switched to the new version once the update is complete. Anything else would result in a terrible user experience.
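For illustration, a minimal boto3 sketch of both sides (the secret name and value are placeholders): a reader always receives a complete version, and a writer creates a new version rather than mutating the current one.

    import boto3

    client = boto3.client("secretsmanager")

    # Reader: always gets a complete version, namely the one labeled AWSCURRENT.
    current = client.get_secret_value(SecretId="my/app/secret")  # placeholder name
    print(current["VersionId"], current["SecretString"])

    # Writer: creates a brand-new, immutable version; the AWSCURRENT label is
    # moved to it only once the new version is fully stored.
    client.put_secret_value(
        SecretId="my/app/secret",
        SecretString='{"api_key": "new-value"}',
    )

    # Older versions stay readable, e.g. via the AWSPREVIOUS staging label.
    previous = client.get_secret_value(
        SecretId="my/app/secret", VersionStage="AWSPREVIOUS"
    )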

Related

AWS SageMaker Validation Error when calling the CreateTrainingJob operation: Input Hyper parameter includes AWS access keys or tokens

When creating a sagemaker training job using the sagemaker python package (using SKLearn or Tensorflow estimators), I pass in a number of hyperparameters. I create a single training job for each of our clients (we have 200), and the job gets named TrainModel-[client_id]-[int:1-50]-[uuid] (unsure if this is important, will explain why soon). The hyperparameters are all integers or strings, and none resemble an AWS access key or token. For 199 out of 200 clients, the training job is created and runs flawlessly.
For one, however, I get the following error: (ValidationException) when calling the CreateTrainingJob operation: Input Hyper parameter contains AWS access key id or access token. This validation is warned about in the documentation, and I have ensured that I do not include any AWS access key ids or tokens (otherwise how would the other 199 run without incident?).
I have examined the actual hyperparameters as they are recorded in the Estimator, and nothing is out of the ordinary. The only hyperparameters that change per client on a given day are the client name, client id, and the integer ranging 1-50 (which indicates a sub-division of the client).
I have read through the source code and cannot find the actual code that validates whether AWS access key IDs and tokens exist in the inputs, and the error and traceback are utterly uninformative (I can't tell which parameter is the offender). Based on the qualities of the hyperparameters that change, the only possible culprit is the client name, which has nothing to do with AWS keys or tokens. Is it possible that, for whatever reason, the validation function (I assume it's a regular expression) incorrectly identifies this particular client name, which, again, has zero reference to anything in our AWS account or to generic AWS terms, as an AWS key or token?
Has anyone encountered this before? If so what was your solution?
It turns out that, however AWS checks for access tokens and key IDs, the check is not very accurate. It was detecting the client_name as an access token or key ID, which it certainly was not. Removing this parameter solved the issue. Watch out for random strings being miscategorized in this way.
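If you hit this, one way to spot the offending value before calling CreateTrainingJob is to scan the hyperparameters yourself. This is only a rough sketch: the regexes below are guesses at what access-key-shaped strings look like, not SageMaker's actual validation.

    import re

    # Hypothetical pre-flight check: flag hyperparameter values that merely look
    # like AWS access key IDs or secret access keys. These patterns are
    # assumptions, not SageMaker's real validation logic.
    SUSPICIOUS_PATTERNS = [
        re.compile(r"(AKIA|ASIA)[0-9A-Z]{16}"),  # access-key-id-shaped string
        re.compile(r"[A-Za-z0-9/+=]{40}"),       # 40-char secret-key-shaped string
    ]

    def flag_suspect_hyperparameters(hyperparameters):
        """Return the names of hyperparameters whose values match a pattern."""
        suspects = []
        for name, value in hyperparameters.items():
            text = str(value)
            if any(p.search(text) for p in SUSPICIOUS_PATTERNS):
                suspects.append(name)
        return suspects

    # Example: a long random-looking client name can trip the second pattern.
    print(flag_suspect_hyperparameters({
        "client_id": 42,
        "client_name": "A" * 40,  # placeholder for a 40-character random string
    }))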

How to renew a cloudformation created API Gateway API Key

I've created users with API Keys in a cloudformation yaml file. We want to renew one API Key, but an API Key is immutable, so it has to be deleted and regenerated. Deleting an API Key manually and then hoping that rerunning the cloudformation script is going to replace it with no other ill effects seems like risky business. What is the recommended way to do this? (I'd prefer not to drop and recreate the entire stack, both for availability reasons and because I only want to renew one of our API keys, not all of them.)
The only strategy I can think of right now is:
1. change the stack so that the name associated with the API Key in question is changed
2. deploy the stack (which should delete the old API Key and create the new one)
3. change the stack to revert the first change, which should leave me with a changed API Key with the same name
4. deploy the stack
Clunky, eh!
It is indeed a bit clunky, but manually deleting the key will not cause CloudFormation to recreate it, since CloudFormation's internal state of the stack still says the key exists.
You could simply change the resource name of the API key and update the stack, but this only works if you can have duplicate names for API keys, which I doubt; I could not find confirmation in the docs.
That leaves doing it in two steps (if you want to keep the same name): one update to remove the old key, and a second update to create the new key. This can be achieved by simply commenting out the corresponding lines for the first step and uncommenting them for the second step, or, as you suggested, by changing the name of the API key and then changing it back.
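If you are worried about other ill effects, you can preview each of the two updates with a change set before executing it. A minimal boto3 sketch (stack name, change set name and template path are placeholders):

    import boto3

    cfn = boto3.client("cloudformation")

    # Stack name, change set name and template path are placeholders.
    with open("stack.yaml") as f:
        template_body = f.read()

    cfn.create_change_set(
        StackName="my-stack",
        ChangeSetName="renew-api-key",
        ChangeSetType="UPDATE",
        TemplateBody=template_body,
    )
    cfn.get_waiter("change_set_create_complete").wait(
        StackName="my-stack", ChangeSetName="renew-api-key"
    )

    # Confirm that only the API key resource is added or removed before executing.
    changes = cfn.describe_change_set(
        StackName="my-stack", ChangeSetName="renew-api-key"
    )
    for change in changes["Changes"]:
        rc = change["ResourceChange"]
        print(rc["Action"], rc["LogicalResourceId"], rc.get("Replacement"))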

aws s3 | bucket key enabled

S3 recently announced the "bucket_key_enabled" option, which caches the KMS key used to encrypt bucket contents so that the number of calls to the KMS service is reduced.
https://docs.aws.amazon.com/AmazonS3/latest/dev/bucket-key.html
So suppose the bucket is configured with server-side encryption enabled by default, using the KMS key "key/arn1". By selecting "enable bucket key", we are caching "key/arn1" so that each object upload to this bucket does not require a call to the KMS service (perhaps internally there is a time-to-live etc., but the crux is that this key is cached and thus KMS limit errors can be avoided).
Given all that, what is the point of overriding the KMS key at the object level and still having "bucket_key_enabled" set?
Eg :
bucket/ -> kms1 & bucket_key_enabled
bucket/prefix1 -> kms2 & bucket_key_enabled
Does s3 actually cache the object-key to kms-key map?
To give you some context, I currently have an application which publishes data to the following structure:
bucket/user1
bucket/user2
While publishing to these prefixes, it explicitly passes the KMS key assigned per user for each object upload:
bucket/user1/obj1 with kms-user-1
bucket/user1/obj2 with kms-user-1
bucket/user1/obj3 with kms-user-1
bucket/user2/obj1 with kms-user-2
bucket/user2/obj2 with kms-user-2
bucket/user2/obj3 with kms-user-2
If S3 is smart enough to reduce this to the following map:
bucket/user1 - kms-user-1
bucket/user2 - kms-user-2
then all I have to do is upgrade the SDK library to the latest version and add withBucketKeyEnabled(true) to the PutObjectRequest in the s3Client wrapper we have.
Let me know how it works internally so that we can make use of this feature wisely.
I finally went with upgrading the SDK to the latest version and passing withBucketKeyEnabled(true) to the putObject API calls.
I was able to confirm with CloudTrail that the number of calls to the KMS service is the same regardless of whether encryption and bucketKeyEnabled are set at the bucket level or on "each" object.
Scenario 1: kms-key and bucketKeyEnabled=true set at the bucket level; no encryption option passed on the putObject() call.
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
Scenario 2: no encryption settings on the S3 bucket; for each putObject() call I pass kms-key and bucketKeyEnabled=true:
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
    .withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
    .withBucketKeyEnabled<PutObjectRequest>(true)
Calls made to GenerateDataKey() = 10
Calls made to Decrypt() = 60
Scenario 3: the same per-object settings but with bucketKeyEnabled left out:
PutObjectRequest(bucketName, key, inputStream, objectMetadata)
    .withSSEAwsKeyManagementParams(SSEAwsKeyManagementParams(keyArn))
Calls made to GenerateDataKey() = 10011
Calls made to Decrypt() = 10002
Thus I was able to conclude that bucketKeyEnabled works regardless of whether you set it at the bucket level or the object level, although I do not know how it is optimized for both access patterns internally.
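For reference, a rough boto3 (Python) equivalent of the per-object call above, with bucket, key and KMS key ARN as placeholders:

    import boto3

    # Bucket, key and KMS key ARN are placeholders.
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket="my-bucket",
        Key="user1/obj1",
        Body=b"payload",
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/kms-user-1",
        # Ask S3 to use an S3 Bucket Key for this object so it does not
        # have to call KMS for every single upload.
        BucketKeyEnabled=True,
    )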

Objects get overwritten in S3 while governance mode along with legal hold is enabled

I'm an absolute beginner in AWS and have been practising for 3 months now.
Recently I was working on S3 and playing a bit with S3 Object Lock. I enabled Object Lock for a specific object with governance mode along with a legal hold. Then I tried to overwrite the object with the same file using the following CLI command:
aws s3 cp /Users/John/Desktop/112133.jpg s3://my-buck/112133.jpg
Interestingly, it succeeded, and I could see in the console that the new file was uploaded and marked as the latest version. Then I read this in the AWS docs:
Bypassing governance mode doesn't affect an object version's legal hold status. If an object version has a legal hold enabled, the legal hold remains in force and prevents requests to overwrite or delete the object version.
Now my question is: how did it get overwritten when this CLI command was used to overwrite the file? I also tried re-uploading the same file in the console, and that worked too.
Moreover, I uploaded another file and enabled Object Lock with compliance mode, and it also got overwritten. But deletion doesn't work in either case, which is as expected.
Did I misunderstand something about the whole S3 Object Lock thing? Any help will be appreciated.
To quote the Object Lock documentation:
Object Lock works only in versioned buckets, and retention periods and legal holds apply to individual object versions. When you lock an object version, Amazon S3 stores the lock information in the metadata for that object version. Placing a retention period or legal hold on an object protects only the version specified in the request. It doesn't prevent new versions of the object from being created.
In other words, your upload did not overwrite the locked object version; it created a new version on top of it, and the protected version is still there underneath.
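You can verify this yourself by listing the object's versions and checking the legal hold on the original version. A minimal boto3 sketch, reusing the bucket and key from the question (the version id is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    # An "overwrite" in a versioned bucket just adds a new version on top;
    # the locked version is still listed here.
    versions = s3.list_object_versions(Bucket="my-buck", Prefix="112133.jpg")
    for v in versions.get("Versions", []):
        print(v["VersionId"], "latest" if v["IsLatest"] else "older")

    # The legal hold still applies to the original (now older) version.
    hold = s3.get_object_legal_hold(
        Bucket="my-buck",
        Key="112133.jpg",
        VersionId="ORIGINAL-VERSION-ID",  # placeholder for the locked version's id
    )
    print(hold["LegalHold"]["Status"])  # expected: "ON"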

Rotating keys and reactively re-encrypting data

I want to introduce key rotation to my system, but for that re-encryption is needed. It would be nice to do it reactively on some event or trigger, but I can't find anything like that in the Google documentation.
After a rotation event, I want to re-encrypt the data with the new key and destroy the old one.
Any ideas on how to achieve this goal?
As of right now, the best that you can do is write something that polls GetCryptoKey at regular intervals, checks whether the primary version has changed, and then decrypts and re-encrypts if it has.
We definitely understand the desire for eventing based on key lifecycle changes, and we've been thinking about the best way to accomplish that in the future. We don't have any plans to share yet, though.
When you rotate an encryption key (or when you enable scheduled rotation on a key), Cloud KMS does not automatically delete the old key version material. You can still decrypt data previously encrypted with the old key unless you manually disable/destroy that key version. You can read more about this in detail in the Cloud KMS Key rotation documentation.
While you may have business requirements, it's not a Cloud KMS requirement that you re-encrypt old data with the new key version material.
New data will be encrypted with the new key
Old data will be decrypted with the old key
At the time of this writing, Cloud KMS does not publish an event when a key is rotated. If you have a business requirement to re-encrypt all existing data with the new key, you could do one of the following:
Use Cloud Scheduler: write a Cloud Function connected to Cloud Scheduler that runs on a periodic basis. For example, if your keys rotate every 72 hours, you could schedule the Cloud Function to run every 24 hours. Happy to provide some sample code if that would help, but the OP didn't specifically ask for code.
Long-poll: write a long-running process that polls the KMS API to check whether the primary crypto key version has changed, and trigger your re-encryption when a change is detected.
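For the long-poll option, a minimal sketch using the google-cloud-kms Python client (project, location, key ring and key names, and the polling interval, are placeholders):

    import time
    from google.cloud import kms

    client = kms.KeyManagementServiceClient()
    # Project, location, key ring and key names are placeholders.
    key_name = client.crypto_key_path("my-project", "global", "my-keyring", "my-key")

    def reencrypt_all_data():
        # Placeholder hook: decrypt with the old version, re-encrypt with the
        # new primary, then disable/destroy the old version when done.
        pass

    last_primary = None
    while True:
        key = client.get_crypto_key(name=key_name)
        primary = key.primary.name  # e.g. .../cryptoKeyVersions/3
        if last_primary is not None and primary != last_primary:
            reencrypt_all_data()
        last_primary = primary
        time.sleep(3600)  # poll hourly; tune to your rotation schedule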