What's the most secure way of accessing s3 resource? - amazon-web-services

The question heading is broad, but my question is not. I just want clarification on my approach. I have an S3 bucket with public access blocked. The bucket policy restricts access by HTTP referer. This is how it looks:
{
    "Version": "2008-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.mysite.com and mysite.com",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::storage/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "https://www.example.com/*",
                        "https://example.com/*"
                    ]
                }
            }
        }
    ]
}
I still get an error if my frontend (on my TLD) tries to access the S3 resource through the URL (URL example - https://storage.s3.amazonaws.com/path/to/my/file.png).
I dropped the approach of hitting the S3 URL directly and decided to build a backend utility on my TLD that fetches the S3 resource in question and sends it back to the frontend. So the URL would look something like this: https://<tld>/fetch-s3-resource/path/to/file.png.
I want to know if this approach is correct or if there's a better one out there. In my mind, even setting an http-referer policy doesn't make much sense, because anyone can make a call to my bucket with the Referer header manually set to my TLD.
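For reference, here is a minimal sketch of that proxy-endpoint approach (the view name mirrors the URL above; the Django-style framing and the bucket name are assumptions based on the rest of the post):

# Hypothetical Django-style view that fetches the object server-side and streams
# it back, so the bucket itself never needs to be publicly accessible.
import boto3
from botocore.exceptions import ClientError
from django.http import HttpResponse, Http404

def fetch_s3_resource(request, object_key):
    s3 = boto3.client('s3')
    try:
        obj = s3.get_object(Bucket='storage', Key=object_key)
    except ClientError:
        raise Http404("Object not found")
    return HttpResponse(obj['Body'].read(), content_type=obj['ContentType'])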
UPDATE - I found out about signed URLs, which should supposedly allow users to publicly access a resource through the URL. This should solve my problem, but I still have public access set to "off" and I don't really know which switch to toggle in order to allow users with signed URLs to access the resource.
Here's the sample code for generating an S3 signed URL. Here's the link to the doc:
import logging
import boto3
from botocore.exceptions import ClientError


def create_presigned_url(bucket_name, object_name, expiration=3600):
    """Generate a presigned URL to share an S3 object

    :param bucket_name: string
    :param object_name: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Presigned URL as string. If error, returns None.
    """
    # Generate a presigned URL for the S3 object
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_url('get_object',
                                                    Params={'Bucket': bucket_name,
                                                            'Key': object_name},
                                                    ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the presigned URL
    return response
UPDATE UPDATE - Since the question apparently isn't clear enough, and it seems like I took some liberties with my "loose" language, let me clarify some things.
1 - What am I actually trying to do?
I want to keep my S3 bucket secure in such a way that only users with presigned URLs generated by "me" can access whatever resource is there.
2 - When I ask if "my approach" is better, or if there's any other approach, what do I mean by that?
I want to know if there's a "native" / AWS-provided way of accessing the bucket without having to write a backend endpoint that fetches the resource and returns it to the frontend.
3 - How do I measure one approach against another?
I think this one is quite obvious: you don't try to write an authentication flow from scratch if there's one provided by your framework. The same logic applies here; if there's a way to access the objects that's documented by AWS, then I probably shouldn't go about writing my own "hack".

Presigned URLs came through. You can have all public access blocked and still be able to generate signed URLs and serve them to the frontend. I've already linked the official documentation in my question; here's the final piece of code I ended up with.
import logging

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError

# `settings` is assumed to be the project's settings module
# (e.g. `from django.conf import settings`) holding the AWS credentials.


def create_presigned_url(bucket_name, bucket_key, expiration=3600, signature_version='s3v4'):
    """Generate a presigned URL for the S3 object

    :param bucket_name: string
    :param bucket_key: string
    :param expiration: Time in seconds for the presigned URL to remain valid
    :param signature_version: string
    :return: Presigned URL as string. If error, returns None.
    """
    s3_client = boto3.client('s3',
                             aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
                             aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
                             config=Config(signature_version=signature_version),
                             region_name='us-east-1')
    try:
        response = s3_client.generate_presigned_url('get_object',
                                                    Params={'Bucket': bucket_name,
                                                            'Key': bucket_key},
                                                    ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None

    # The response contains the pre-signed URL
    return response
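For completeness, a minimal sketch of how this helper could be exposed to the frontend (the view name is hypothetical and the bucket name reuses the one from the question; the frontend then loads the returned, time-limited URL directly from S3):

# Hypothetical endpoint returning a short-lived presigned URL instead of proxying the bytes.
from django.http import JsonResponse

def presign_s3_resource(request, object_key):
    url = create_presigned_url('storage', object_key, expiration=300)
    if url is None:
        return JsonResponse({'error': 'could not generate presigned URL'}, status=500)
    return JsonResponse({'url': url})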

Related

How do you setup AWS Cloudfront to provide custom access to S3 bucket with signed cookies using wildcards

AWS Cloudfront with Custom Cookies using Wildcards in Lambda Function:
The problem:
On AWS S3 storage, the preferred method of providing granular access control is to use AWS CloudFront with signed URLs.
Here is a good example of how to set up CloudFront (a bit old, though, so use the recommended settings rather than
the legacy ones and copy the generated policy down to S3):
https://medium.com/@himanshuarora/protect-private-content-using-cloudfront-signed-cookies-fd9674faec3
I have provided an example below of how to create one of these signed URLs using Python and the newest libraries.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html
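A minimal sketch of such a per-file, canned-policy signed URL using botocore's built-in CloudFrontSigner (the key-pair ID, key file and distribution URL below are placeholders, not values from the original post):

# Sketch: CloudFront signed URL with a canned policy (one URL per file).
# KEY_PAIR_ID, private_key.pem and the distribution URL are placeholders.
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    with open("private_key.pem", "rb") as key_file:
        key = serialization.load_pem_private_key(key_file.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("KEY_PAIR_ID", rsa_signer)
signed_url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxxx.cloudfront.net/path/to/file.png",
    date_less_than=datetime.datetime.now() + datetime.timedelta(hours=1),
)
print(signed_url)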
However, this requires the creation of a signed URL for each item in the S3 bucket. To give wildcard access to a
directory of items in the S3 bucket you need to use what is called a custom policy. I could not find any working examples
of this using Python; many of the online examples use libraries that are deprecated. But attached is a working example.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html
I had trouble getting the Python cryptography package to work when building the Lambda function on an Amazon Linux 2
instance on AWS EC2; it always came up with an error about a missing library. So I used Klayers for AWS and it worked:
https://github.com/keithrozario/Klayers/tree/master/deployments
A working example for cookies with a canned policy (meaning a signed URL specific to each S3 file):
https://www.velotio.com/engineering-blog/s3-cloudfront-to-deliver-static-asset
My code below produces cookies for a custom policy (meaning a single policy statement with URL wildcards, etc.). You must follow the
cryptography-package style of examples, but note that the private_key.signer function was deprecated in favor of a new private_key.sign
function with an extra argument. https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#signing
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
import base64
import datetime


class CFSigner:
    def sign_rsa(self, message):
        private_key = serialization.load_pem_private_key(
            self.keyfile, password=None, backend=default_backend()
        )
        signature = private_key.sign(
            message.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1()
        )
        return signature

    def _sign_string(self, message, private_key_file=None, private_key_string=None):
        if private_key_file:
            self.keyfile = open(private_key_file, "rb").read()
        elif private_key_string:
            self.keyfile = private_key_string.encode("utf-8")
        return self.sign_rsa(message)

    def _url_base64_encode(self, msg):
        msg_base64 = base64.b64encode(msg).decode("utf-8")
        msg_base64 = msg_base64.replace("+", "-")
        msg_base64 = msg_base64.replace("=", "_")
        msg_base64 = msg_base64.replace("/", "~")
        return msg_base64

    def generate_signature(self, policy, private_key_file=None):
        signature = self._sign_string(policy, private_key_file)
        encoded_signature = self._url_base64_encode(signature)
        return encoded_signature

    def create_signed_cookies2(self, url, private_key_file, keypair_id, expires_at):
        policy = self.create_custom_policy(url, expires_at)
        encoded_policy = self._url_base64_encode(policy.encode("utf-8"))
        signature = self.generate_signature(policy, private_key_file=private_key_file)
        cookies = {
            "CloudFront-Policy": encoded_policy,
            "CloudFront-Signature": signature,
            "CloudFront-Key-Pair-Id": keypair_id,
        }
        return cookies

    def create_signed_cookies(self, object_url, expires_at):
        cookies = self.create_signed_cookies2(
            url=object_url,
            private_key_file="xxx.pem",
            keypair_id="xxxxxxxxxx",
            expires_at=expires_at,
        )
        return cookies

    def create_custom_policy(self, url, expires_at):
        return (
            '{"Statement":[{"Resource":"'
            + url
            + '","Condition":{"DateLessThan":{"AWS:EpochTime":'
            + str(round(expires_at.timestamp()))
            + "}}}]}"
        )


def sign_to_cloudfront(object_url, expires_at):
    # Signed-URL variant; CFSigner.create_signed_url is not shown in this snippet.
    cf = CFSigner()
    url = cf.create_signed_url(
        url=object_url,
        keypair_id="xxxxxxxxxx",
        expire_time=expires_at,
        private_key_file="xxx.pem",
    )
    return url


def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response.get("headers", None)
    cf = CFSigner()
    path = "https://www.example.com/*"
    expire = datetime.datetime.now() + datetime.timedelta(days=3)
    signed_cookies = cf.create_signed_cookies(path, expire)
    # Three differently-cased keys are used so each cookie keeps its own Set-Cookie header.
    headers["set-cookie"] = [{
        "key": "set-cookie",
        "value": f"CloudFront-Policy={signed_cookies.get('CloudFront-Policy')}"
    }]
    headers["Set-cookie"] = [{
        "key": "Set-cookie",
        "value": f"CloudFront-Signature={signed_cookies.get('CloudFront-Signature')}",
    }]
    headers["Set-Cookie"] = [{
        "key": "Set-Cookie",
        "value": f"CloudFront-Key-Pair-Id={signed_cookies.get('CloudFront-Key-Pair-Id')}",
    }]
    print(response)
    return response

Uploading to Amazon S3 using a signed URL

I'm implementing file upload functionality that will be used by an Angular application, but I am having numerous issues getting it to work and need help figuring out what I am missing. Here is an overview of the resources in place, and of the testing and results I'm getting.
Infrastructure
I have an Amazon S3 bucket created with versioning enabled, encryption enabled and all public access is blocked.
An API gateway with a Lambda function that generates a pre-signed URL. The code is shown below.
def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
The bucket name and file path are set in the class constructor. In this example they are:
def construct_file_names(self):
    self.file_path = self.account_number + '-' + self.user_id + '-' + self.experiment_id + '-experiment-data.json'
    self.bucket_name = self.account_number + '-' + self.user_id + '-resources'
Testing via Postman
Before implementing it within my Angular application, I am testing the upload functionality via Postman.
The response from my API endpoint for the pre-signed URL is shown below
Using these values, I make another API call from Postman and receive the response below
If anybody can see what I might be doing wrong here, please let me know. I have played around with different fields in the boto3 method, but ultimately I am getting 403 errors with different messages related to policy conditions. Any help would be appreciated.
Update 1
I tried to adjust the order of "file" and "acl" but received another error shown below
Update 2 - Using signature v4
I updated the pre-signed URL code, shown below
def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)
When the Lambda function is triggered by the API call, the following is received by Postman
Using the new key values returned from the API, I proceeded to try another test upload. The results are shown below
Once again an error, but I think I'm heading in the right direction.
Try moving acl above the file row. Make sure file is at the end.
I finally got this working, so I will post an answer here summarising the steps taken.
Python Code for generating pre-signed URL via boto in eu-west-1
Use signature v4 signing - https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
import logging

import boto3
from botocore.config import Config
from botocore.exceptions import ClientError


def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        # UploadData is the helper class described above; it builds the bucket name and file path.
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)


def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
Uploading via Postman
Ensure the order is correct with "file" being last
Ensure "Content-Type" matches what you have in the code to generate the URL. In my case it was "". Once added, the conditions error received went away.
S3 Bucket
Enable a CORS policy if required. I needed one and it is shown below, but this link can help - https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
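If you prefer to apply the CORS configuration from code rather than the console, a boto3 sketch along these lines should work (the bucket name is a placeholder):

# Sketch: applying the CORS rules above with boto3; bucket name is a placeholder.
import boto3

s3 = boto3.client("s3", region_name="eu-west-1")
s3.put_bucket_cors(
    Bucket="my-upload-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["*"],
                "AllowedMethods": ["PUT", "POST", "DELETE"],
                "AllowedOrigins": ["*"],
                "ExposeHeaders": [],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)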
Upload via Angular
My issue arose during testing with Postman, but I was implementing the functionality to be used by an Angular application. Here is a code snippet of how the upload is done by first calling the API for the pre-signed URL, and then uploading directly.
Summary
Check your S3 infrastructure
Enable CORS if need be
Use sig 4 explicitly in the SDK of choice, if working in an old region
Ensure the form data order is correct
Hope all these pieces help others who are trying to achieve the same. Thanks for all the hints from SO members.

Bypassing need for x-amz-cf-id header inclusion in S3 auth in cloudfront

I have a not completely orthodox CF->S3 setup. The relevant components here are:
Cloudfront distribution with origin s3.ap-southeast-2.amazonaws.com
Lambda@Edge function (Origin Request) that adds an S3 authorisation (version 2) query string (signed using the S3 policy the function uses).
The request returned from Lambda is completely correct. If I log the URI, host and query string, I get the file I am requesting. However, if I access it through the CloudFront link directly, the request fails because it no longer uses the AWSAccessKeyId; instead it opts to use x-amz-cf-id (but uses the same Signature, Amz-Security-Token, etc.). CORRECTION: it may not replace it, but be required in addition to it.
I know this is the case because I have returned both the StringToSign and the SignatureProvided. These both match the Lambda response except for the AWSAccessKeyId, which has been replaced with the x-amz-cf-id.
This is a very specific question, obviously. I may have to look at remodelling this architecture, but I would prefer not to. There are several requirements which have led me down this not completely regular setup.
I believe the AWSAccessKeyID => x-amz-cf-id replacement is the result of two mechanisms:
First, you need to configure CloudFront to forward the query parameters to the origin. Without that, it will strip all parameters. If you use S3 signed URLs, make sure to also cache based on all parameters as otherwise you'll end up without any access control.
Second, CloudFront attaches the x-amz-cf-id to requests that are not going to an S3 origin. You can double-check the origin type in the CloudFront console; you need to make sure it is reported as S3. I have a blog post describing it in detail.
But adding the S3 signature to all the requests with Lambda@Edge defeats the purpose. If you want to keep the bucket private and only allow CloudFront to access it, then use an Origin Access Identity; that is precisely the use case it is for.
So it seems that with authentication v2 or v4, the x-amz-cf-id header that's appended to the origin request and is inaccessible to the Lambda@Edge origin request function must be included in the authentication string. This is not possible.
The simple solution is to use the built-in S3 integration in CloudFront, and use a Lambda@Edge origin request function that switches the bucket if, like me, that's your desired goal. For each bucket you want to use, add the following policy to allow your CF distribution to access the objects within the bucket.
{
    "Version": "2008-10-17",
    "Id": "PolicyForCloudFrontPrivateContent",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity <CloudfrontID>"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}
CloudfrontID refers to the ID under Origin Access Identity, not the Amazon S3 Canonical ID.
x-amz-cf-id is a reserved header of CloudFront and can be read from the event as event['Records'][0]['cf']['config']['requestId']. You don't have to calculate the v4 authentication with x-amz-cf-id yourself.
I had a similar task of returning an S3 signed URL from a CloudFront origin request Lambda@Edge function. Here is what I found:
If your S3 bucket does not have dots in the name, you can use an S3 origin in CloudFront, use a domain name in the form <bucket_name>.s3.<region>.amazonaws.com, and generate the signed URL e.g. via getSignedUrl from @aws-sdk/s3-request-presigner. CloudFront should be configured to pass the URL query to the origin. Do not grant CloudFront access to the S3 bucket in this case: the presigned URL will grant access to the bucket.
However, when your bucket does have dots in the name, the signed URL produced by the function will be a path-style URL and you will need to use a CloudFront custom origin with the s3.<region>.amazonaws.com domain. When using a custom origin, CloudFront adds the "x-amz-cf-id" header to the request to the origin. Quite inconveniently, the value of the header has to be signed. However, provided you do not change the origin domain in the Lambda@Edge return value, CloudFront seems to use the same value for the "x-amz-cf-id" header as is passed to the lambda event in the event.Records[0].cf.config.requestId field. You can then generate the S3 signed URL with the value of the header. With AWS JavaScript SDK v3 this can be done using S3Client.middlewareStack.add.
Here is an example of a JavaScript Lambda@Edge function producing an S3 signed URL with the "x-amz-cf-id" header:
const {S3Client, GetObjectCommand} = require("@aws-sdk/client-s3");
const {getSignedUrl} = require("@aws-sdk/s3-request-presigner");

exports.handler = async function handler(event, context) {
    console.log('Request: ', JSON.stringify(event));
    let bucketName = 'XXX';
    let fileName = 'XXX';
    let bucketRegion = 'XXX';
    // Pre-requisite: this Lambda@Edge function has 's3:GetObject' permission for bucket ${bucketName}, otherwise you will get AccessDenied
    const command = new GetObjectCommand({
        Bucket: bucketName, Key: fileName,
    });
    const s3Client = new S3Client({region: bucketRegion});
    s3Client.middlewareStack.add((next, context) => async (args) => {
        args.request.headers["x-amz-cf-id"] = event.Records[0].cf.config.requestId;
        return await next(args);
    }, {
        step: "build", name: "addXAmzCfIdHeaderMiddleware",
    });
    let signedS3Url = await getSignedUrl(s3Client, command, {
        signableHeaders: new Set(["x-amz-cf-id"]), unhoistableHeaders: new Set(["x-amz-cf-id"])
    });
    let parsedUrl = new URL(signedS3Url);
    const request = event.Records[0].cf.request;
    if (!request.origin.custom || request.origin.custom.domainName != parsedUrl.hostname) {
        return {
            status: '500',
            body: `CloudFront should use custom origin configured to the matching domain '${parsedUrl.hostname}'.`,
            headers: {
                'content-type': [{key: 'Content-Type', value: 'text/plain; charset=UTF-8'}]
            }
        };
    }
    request.querystring = parsedUrl.search.substring(1); // drop '?'
    request.uri = parsedUrl.pathname;
    console.log('Response: ', JSON.stringify(request));
    return request;
}

RDS generate_presigned_url does not support the DestinationRegion parameter

I was trying to set up an encrypted RDS replica in another region, but I got stuck on generating the pre-signed URL.
It seems that boto3/botocore does not allow the DestinationRegion parameter, which is defined as a requirement in the AWS API (link) when we want to generate a PreSignedUrl.
Versions used:
boto3 (1.4.7)
botocore (1.7.10)
Output:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "DestinationRegion", must be one of: DBInstanceIdentifier, SourceDBInstanceIdentifier, DBInstanceClass, AvailabilityZone, Port, AutoMinorVersionUpgrade, Iops, OptionGroupName, PubliclyAccessible, Tags, DBSubnetGroupName, StorageType, CopyTagsToSnapshot, MonitoringInterval, MonitoringRoleArn, KmsKeyId, PreSignedUrl, EnableIAMDatabaseAuthentication, SourceRegion
Example code:
import boto3

url = boto3.client('rds', 'eu-east-1').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DestinationRegion': 'eu-east-1',
        'SourceDBInstanceIdentifier': 'abc',
        'KmsKeyId': '1234',
        'DBInstanceIdentifier': 'someidentifier'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
The same issue was already reported but got closed.
Thanks for help,
Petar
Generate the pre-signed URL from the source region, then populate create_db_instance_read_replica with that URL.
The presigned URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source AWS Region that contains the encrypted source DB instance.
PreSignedUrl (string) --
The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the source AWS Region that contains the source DB instance.
import boto3

session = boto3.Session(profile_name='profile_name')

url = session.client('rds', 'SOURCE_REGION').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DBInstanceIdentifier': 'db-1-read-replica',
        'SourceDBInstanceIdentifier': 'database-source',
        'SourceRegion': 'SOURCE_REGION'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
print(url)

source_db = session.client('rds', 'SOURCE_REGION').describe_db_instances(
    DBInstanceIdentifier='database-SOURCE'
)
print(source_db)

response = session.client('rds', 'DESTINATION_REGION').create_db_instance_read_replica(
    SourceDBInstanceIdentifier="arn:aws:rds:SOURCE_REGION:account_number:db:database-SOURCE",
    DBInstanceIdentifier="db-1-read-replica",
    KmsKeyId='DESTINATION_REGION_KMS_ID',
    PreSignedUrl=url,
    SourceRegion='SOURCE'
)
print(response)

Cloudfront signed cookies issue, getting 403

We have used CloudFront to store image URLs and are using signed cookies to provide access only through our application. Without signed cookies we are able to access content, but after enabling signed cookies we are getting HTTP 403.
Below is configuration/cookies we are sending:
Cookies going with the request:
CloudFront-Expires: 1522454400
CloudFront-Key-Pair-Id: xyz...
CloudFront-Policy: abcde...
CloudFront-Signature: abce...
Here is our CloudFront policy:
{
    "Statement": [
        {
            "Resource": "https://*.abc.com/*",
            "Condition": {
                "DateLessThan": {"AWS:EpochTime": 1522454400}
            }
        }
    ]
}
The cookie domain is .abc.com, and the resource path is https://*.abc.com/*.
We are using CannedPolicy to create CloudFront cookies.
Why isn't this working as expected?
I have got the solution. Our requirement was wildcard access.
CloudFrontCookieSigner.getCookiesForCustomPolicy(
    this.resourcePath,
    pk,
    this.keyPairId,
    expiresOn,
    null,
    "0.0.0.0/0"
);
Where:
resource path = "https://" + <distribution domain name> + "/*"
activeFrom = optional, so pass it as null
pk = private key (a few APIs also take a file, but that didn't work, so read the private key from the file and use the function above)
We wanted to access all content under the distribution, and a canned policy doesn't allow wildcards, so we changed it to a custom policy and it worked.
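For comparison, a minimal Python sketch of the same idea for signed URLs (not cookies), using botocore's CloudFrontSigner with a custom policy, since the custom policy is what permits the wildcard resource; the key-pair ID, key file and distribution domain are placeholders:

# Sketch: custom-policy CloudFront signed URL with a wildcard resource.
# KEY_PAIR_ID, private_key.pem and the distribution domain are placeholders.
import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    with open("private_key.pem", "rb") as key_file:
        key = serialization.load_pem_private_key(key_file.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("KEY_PAIR_ID", rsa_signer)
expires = datetime.datetime.now() + datetime.timedelta(hours=1)
# build_policy accepts a wildcard resource, which a canned policy does not
policy = signer.build_policy("https://dxxxxxxxxxxxxx.cloudfront.net/*", date_less_than=expires)
signed_url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxxx.cloudfront.net/some/file.png", policy=policy
)
print(signed_url)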
Review the documentation again
There are only 3 cookies, with the last being either CloudFront-Expires for a canned policy, or CloudFront-Policy for a custom policy.
We are using CannedPolicy
A canned policy has an implicit resource of *, so a canned policy statement cannot have an explicit Resource; you are therefore in fact using a custom policy. If all else is implemented correctly, your solution may simply be to remove the CloudFront-Expires cookie, which isn't used with a custom policy.
"Canned" (bottled, jugged, pre-packaged) policies are used in cases where the only unique information in the policy is the expiration. Their advantage is that they require marginally less bandwidth (and make shorter URLs when creating signed URLs). Their disadvantage is that they are wildcards by design, which is not always what you want.
There can be multiple reasons for a 403 AccessDenied response. In our case, after debugging, we learnt that when using signed cookies, the CloudFront-Key-Pair-Id cookie must remain the same for every request, while the CloudFront-Policy and CloudFront-Signature cookies change values per request; otherwise a 403 Access Denied will occur.
For anyone still struggling with this today or looking for clarification: you need to generate a custom policy if you would like to use wildcards in the resource URL, i.e. https://mycdn.abc.com/protected-content-folder/*
The AWS CloudFront API has changed over the years; currently, the easiest way to generate signed CloudFront cookies or URLs is via the AWS SDK, if available in your language of choice. Here is an example in NodeJS using the JavaScript v3 AWS SDK:
const fs = require("fs");
const { getSignedCookies } = require("@aws-sdk/cloudfront-signer");

// Read in your private_key.pem that you generated.
const privateKey = fs.readFileSync("./.secrets/private_key.pem", {
    encoding: "utf8",
});
const keyPairId = "XXXXXXXXXXXXXX"; // ID of the CloudFront public key associated with private_key.pem
const resource = `https://mycdn.abc.com/protected-content-folder/*`;
const dateLessThan = 1658593534;
const policyStr = JSON.stringify({
    Statement: [
        {
            Resource: resource,
            Condition: {
                DateLessThan: {
                    "AWS:EpochTime": dateLessThan,
                },
            },
        },
    ],
});

const cfCookies = getSignedCookies({
    keyPairId,
    privateKey,
    policy: policyStr,
});

// Set CloudFront cookies using your web framework of choice
const expiryDate = new Date(dateLessThan * 1000); // browser-side cookie expiry, matching the policy
const cfCookieConfig = {
    httpOnly: true,
    secure: process.env.NODE_ENV === "production" ? true : false,
    sameSite: "lax",
    signed: false,
    expires: expiryDate,
    domain: ".abc.com",
};
for (let cookie in cfCookies) {
    ctx.cookies.set(cookie, cfCookies[cookie], { ...cfCookieConfig });
}
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html