I was trying to set up an encrypted RDS read replica in another region, but I got stuck on generating the pre-signed URL.
It seems that boto3/botocore does not accept the DestinationRegion parameter, which the AWS API (link) defines as a requirement when generating a PreSignedUrl.
Versions used:
boto3 (1.4.7)
botocore (1.7.10)
Output:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "DestinationRegion", must be one of: DBInstanceIdentifier, SourceDBInstanceIdentifier, DBInstanceClass, AvailabilityZone, Port, AutoMinorVersionUpgrade, Iops, OptionGroupName, PubliclyAccessible, Tags, DBSubnetGroupName, StorageType, CopyTagsToSnapshot, MonitoringInterval, MonitoringRoleArn, KmsKeyId, PreSignedUrl, EnableIAMDatabaseAuthentication, SourceRegion
Example code:
import boto3

url = boto3.client('rds', 'eu-east-1').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DestinationRegion': 'eu-east-1',
        'SourceDBInstanceIdentifier': 'abc',
        'KmsKeyId': '1234',
        'DBInstanceIdentifier': 'someidentifier'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
The same issue was already reported but was closed.
Thanks for help,
Petar
Generate the pre-signed URL from the source region, then populate create_db_instance_read_replica with that URL.
The pre-signed URL must be a valid request for the CreateDBInstanceReadReplica API action that can be executed in the source AWS Region that contains the encrypted source DB instance.
PreSignedUrl (string) --
The URL that contains a Signature Version 4 signed request for the CreateDBInstanceReadReplica API action in the source AWS Region that contains the source DB instance.
import boto3

session = boto3.Session(profile_name='profile_name')

# Generate the pre-signed URL in the SOURCE region
url = session.client('rds', 'SOURCE_REGION').generate_presigned_url(
    ClientMethod='create_db_instance_read_replica',
    Params={
        'DBInstanceIdentifier': 'db-1-read-replica',
        'SourceDBInstanceIdentifier': 'database-source',
        'SourceRegion': 'SOURCE_REGION'
    },
    ExpiresIn=3600,
    HttpMethod=None
)
print(url)

source_db = session.client('rds', 'SOURCE_REGION').describe_db_instances(
    DBInstanceIdentifier='database-SOURCE'
)
print(source_db)

# Create the read replica in the DESTINATION region, passing the pre-signed URL
response = session.client('rds', 'DESTINATION_REGION').create_db_instance_read_replica(
    SourceDBInstanceIdentifier="arn:aws:rds:SOURCE_REGION:account_number:db:database-SOURCE",
    DBInstanceIdentifier="db-1-read-replica",
    KmsKeyId='DESTINATION_REGION_KMS_ID',
    PreSignedUrl=url,
    SourceRegion='SOURCE_REGION'
)
print(response)
Related
AWS Cloudfront with Custom Cookies using Wildcards in Lambda Function:
The problem:
On AWS S3 storage, the preferred method for providing granular access control is to use AWS CloudFront with signed URLs.
Here is a good example of how to set up CloudFront (a bit old, though, so use the recommended settings rather than the legacy ones and copy the generated policy down to S3):
https://medium.com/#himanshuarora/protect-private-content-using-cloudfront-signed-cookies-fd9674faec3
I have provided an example below of how to create one of these signed URLs using Python and the newest libraries.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-canned-policy.html
However, this requires the creation of a signed URL for each item in the S3 bucket. To give wildcard access to a directory of items in the S3 bucket, you need to use what is called a custom policy. I could not find any working examples of this code using Python; many of the online examples use libraries that are deprecated. But attached is a working example.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-creating-signed-url-custom-policy.html
I had trouble getting the Python cryptography package to work when building the Lambda function on an Amazon Linux 2 instance on AWS EC2; it always came up with an error about a missing library. So I used Klayers for AWS instead, and that worked:
https://github.com/keithrozario/Klayers/tree/master/deployments.
A working example of cookies for a canned policy (meaning a signed URL specific to each S3 file):
https://www.velotio.com/engineering-blog/s3-cloudfront-to-deliver-static-asset
My code below is for cookies with a custom policy (meaning a single policy statement with URL wildcards, etc.). You must follow the cryptography package's examples, but note that the private_key.signer function was deprecated in favor of the new private_key.sign function, which takes an extra argument. https://cryptography.io/en/latest/hazmat/primitives/asymmetric/rsa/#signing
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
import base64
import datetime


class CFSigner:
    def sign_rsa(self, message):
        # Load the CloudFront key pair's private key and sign the policy with RSA/SHA-1
        private_key = serialization.load_pem_private_key(
            self.keyfile, password=None, backend=default_backend()
        )
        signature = private_key.sign(
            message.encode("utf-8"), padding.PKCS1v15(), hashes.SHA1()
        )
        return signature

    def _sign_string(self, message, private_key_file=None, private_key_string=None):
        if private_key_file:
            self.keyfile = open(private_key_file, "rb").read()
        elif private_key_string:
            self.keyfile = private_key_string.encode("utf-8")
        return self.sign_rsa(message)

    def _url_base64_encode(self, msg):
        # CloudFront expects URL-safe base64 with +, = and / replaced by -, _ and ~
        msg_base64 = base64.b64encode(msg).decode("utf-8")
        msg_base64 = msg_base64.replace("+", "-")
        msg_base64 = msg_base64.replace("=", "_")
        msg_base64 = msg_base64.replace("/", "~")
        return msg_base64

    def generate_signature(self, policy, private_key_file=None):
        signature = self._sign_string(policy, private_key_file)
        encoded_signature = self._url_base64_encode(signature)
        return encoded_signature

    def create_signed_cookies2(self, url, private_key_file, keypair_id, expires_at):
        policy = self.create_custom_policy(url, expires_at)
        encoded_policy = self._url_base64_encode(policy.encode("utf-8"))
        signature = self.generate_signature(policy, private_key_file=private_key_file)
        cookies = {
            "CloudFront-Policy": encoded_policy,
            "CloudFront-Signature": signature,
            "CloudFront-Key-Pair-Id": keypair_id,
        }
        return cookies

    def create_signed_cookies(self, object_url, expires_at):
        cookies = self.create_signed_cookies2(
            url=object_url,
            private_key_file="xxx.pem",
            keypair_id="xxxxxxxxxx",
            expires_at=expires_at,
        )
        return cookies

    def create_custom_policy(self, url, expires_at):
        # Custom policy: a single statement whose Resource may contain wildcards
        return (
            '{"Statement":[{"Resource":"'
            + url
            + '","Condition":{"DateLessThan":{"AWS:EpochTime":'
            + str(round(expires_at.timestamp()))
            + "}}}]}"
        )


def sign_to_cloudfront(object_url, expires_at):
    # Canned-policy signed URL helper; create_signed_url is not shown in this snippet
    cf = CFSigner()
    url = cf.create_signed_url(
        url=object_url,
        keypair_id="xxxxxxxxxx",
        expire_time=expires_at,
        private_key_file="xxx.pem",
    )
    return url


def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response.get("headers", None)
    cf = CFSigner()
    path = "https://www.example.com/*"
    expire = datetime.datetime.now() + datetime.timedelta(days=3)
    signed_cookies = cf.create_signed_cookies(path, expire)
    # Lambda@Edge keeps one entry per header key, so three case variants of
    # "Set-Cookie" are used to return all three cookies
    headers["set-cookie"] = [{
        "key": "set-cookie",
        "value": f"CloudFront-Policy={signed_cookies.get('CloudFront-Policy')}"
    }]
    headers["Set-cookie"] = [{
        "key": "Set-cookie",
        "value": f"CloudFront-Signature={signed_cookies.get('CloudFront-Signature')}",
    }]
    headers["Set-Cookie"] = [{
        "key": "Set-Cookie",
        "value": f"CloudFront-Key-Pair-Id={signed_cookies.get('CloudFront-Key-Pair-Id')}",
    }]
    print(response)
    return response
I'm implementing file-upload functionality that will be used by an Angular application, but I am having numerous issues getting it to work and need help figuring out what I am missing. Here is an overview of the resources in place, and the testing and results I'm getting.
Infrastructure
I have an Amazon S3 bucket created with versioning enabled, encryption enabled and all public access is blocked.
An API Gateway with a Lambda function that generates a pre-signed URL. The code is shown below.
def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
The bucket name and file path are set as part of the class constructor. In this example, the bucket and file path are constructed as follows:
def construct_file_names(self):
    self.file_path = self.account_number + '-' + self.user_id + '-' + self.experiment_id + '-experiment-data.json'
    self.bucket_name = self.account_number + '-' + self.user_id + '-resources'
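For context, here is a minimal sketch of what the UploadData constructor assumed by these snippets might look like. It is an assumption based on the UploadData(x["userId"], x["experimentId"], s3, sts) call shown later; the account-number lookup via STS is my guess, not the author's code.

import boto3

class UploadData:
    # Hypothetical constructor; the real class may differ
    def __init__(self, user_id, experiment_id, s3_client, sts_client):
        self.s3 = s3_client
        self.user_id = user_id
        self.experiment_id = experiment_id
        # Assumption: account number taken from the caller identity
        self.account_number = sts_client.get_caller_identity()["Account"]
        self.construct_file_names()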
Testing via Postman
Before implementing it within my Angular application, I am testing the upload functionality via Postman.
The response from my API endpoint for the pre-signed URL is shown below
Using these values, I make another API call from Postman and receive the response below
If anybody can see what I might be doing wrong here, please let me know. I have played around with different fields in the boto3 method, but ultimately I am getting 403 errors with different messages related to policy conditions. Any help would be appreciated.
Update 1
I tried to adjust the order of "file" and "acl" but received another error shown below
Update 2 - Using signature v4
I updated the pre-signed URL code, shown below
def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)
When the Lambda function is triggered by the API call, the following is received by Postman
Using the new key values returned from the API, I proceeded to try another test upload. The results are shown below
Once again an error, but I think I'm going in the right direction.
Try moving acl above the file row, and make sure file is the last field.
I finally got this working, so I will post an answer here summarising the steps taken.
Python code for generating the pre-signed URL via boto3 in eu-west-1
Use signature v4 signing - https://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
def upload_data(x):
    try:
        config = Config(
            signature_version='s3v4',
        )
        s3 = boto3.client('s3', "eu-west-1", config=config)
        sts = boto3.client('sts', "eu-west-1")
        data_upload = UploadData(x["userId"], x["experimentId"], s3, sts)
        return data_upload.generate_upload_url()
    except Exception as e:
        logging.error(e)


def generate_upload_url(self):
    try:
        conditions = [
            {"acl": "private"},
            ["starts-with", "$Content-Type", ""]
        ]
        fields = {"acl": "private"}
        response = self.s3.generate_presigned_post(self.bucket_name,
                                                   self.file_path,
                                                   Fields=fields,
                                                   Conditions=conditions,
                                                   ExpiresIn=3600)
    except ClientError as e:
        logging.error(e)
        return None
    return response
Uploading via Postman
Ensure the order of the form fields is correct, with "file" being last.
Ensure "Content-Type" matches what you have in the code that generates the URL. In my case it was "". Once added, the conditions error I was receiving went away. A minimal sketch of the equivalent request in Python is shown below.
S3 Bucket
Enable a CORS policy if required. I needed one and it is shown below, but this link can help - https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [],
        "MaxAgeSeconds": 3000
    }
]
Upload via Angular
My issue arose during testing with Postman, but I was implementing the functionality to be used by an Angular application. Here is a code snippet of how the upload is done by first calling the API for the pre-signed URL, and then uploading directly.
Summary
Check your S3 infrastructure
Enable CORS if need be
Use sig 4 explicitly in the SDK of choice, if working in an old region
Ensure the form data order is correct
Hope all these pieces help others who are trying to achieve the same. Thanks for all the hints from SO members.
I'm writing a custom S3 bucket policy using AWS CDK that requires the canonical ID of the account as a key parameter. I can get the account ID programmatically using CDK core; see the Python sample below.
cid = core.Aws.ACCOUNT_ID
Is there any way to get the same for the canonical ID?
Update:
I've found a workaround using S3API call. I've added the following code in my CDK stack. May be helpful to someone.
def find_canonical_id(self):
    s3_client = boto3.client('s3')
    return s3_client.list_buckets()['Owner']['ID']
I found 2 ways to get the canonical ID (boto3):
Method 1 - Through the List Buckets API (also mentioned by the author in the update)
This method is recommended by AWS as well.
import boto3
client = boto3.client("s3")
response = client.list_buckets()
canonical_id = response["Owner"]["ID"]
Method 2 - Through the Get Bucket ACL API
import boto3

client = boto3.client("s3")
response = client.get_bucket_acl(
    Bucket='sample-bucket'  # should be in your acct
)
canonical_id = response["Owner"]["ID"]
I have an S3 bucket with multiple folders. How can I generate an S3 pre-signed URL for the latest object in each folder requested by a user, using Python boto3?
You can do something like this:
import boto3
from botocore.client import Config
import requests

bucket = 'bucket-name'
folder = '/'  # you can add a folder path here; don't forget the trailing '/'

s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

# List the objects under the prefix and pick the most recently modified one
objs = s3.list_objects(Bucket=bucket, Prefix=folder)['Contents']
latest = max(objs, key=lambda x: x['LastModified'])
print(latest)

print("Generating pre-signed url...")
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': bucket,
        'Key': latest['Key']
    }
)
print(url)

response = requests.get(url)
print(response.url)
Here it will give the latest-modified file from the whole bucket; however, you can update the logic and the prefix value as needed (see the per-folder sketch below).
If you are using a Kubernetes pod, a VM, or anything else, you can pass environment variables or use a Python dict to store the latest key if required.
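For the per-folder case in the question, a rough sketch (bucket and folder names are placeholders) that wraps the same logic in a function taking the prefix a user asks for:

import boto3
from botocore.client import Config

s3 = boto3.client('s3', config=Config(signature_version='s3v4'))

def presign_latest(bucket, prefix, expires_in=3600):
    # List the objects under the requested folder and pick the newest one
    objs = s3.list_objects_v2(Bucket=bucket, Prefix=prefix).get('Contents', [])
    if not objs:
        return None
    latest = max(objs, key=lambda o: o['LastModified'])
    return s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': bucket, 'Key': latest['Key']},
        ExpiresIn=expires_in
    )

# Example: newest object under the "reports/" folder (placeholder names)
print(presign_latest('bucket-name', 'reports/'))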
If it's a small bucket, then recursively list the bucket with the prefix as needed, sort the results by timestamp, and create the pre-signed URL for the latest object.
If it's a very large bucket, this will be very inefficient and you should consider other ways to store the key of the latest file. For example: trigger a Lambda function whenever an object is uploaded and write that object's key into a LATEST item in DynamoDB (or another persistent store); a sketch of that idea follows.
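A rough sketch of that Lambda approach, assuming a hypothetical DynamoDB table named latest-object-index with prefix as its partition key (names are placeholders, not from the original answer):

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('latest-object-index')  # hypothetical table name

def lambda_handler(event, context):
    # Triggered by s3:ObjectCreated:*; record the most recent key per folder prefix
    for record in event['Records']:
        key = record['s3']['object']['key']
        prefix = key.rsplit('/', 1)[0] + '/' if '/' in key else ''
        table.put_item(Item={'prefix': prefix, 'latest_key': key})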
Is there a way to upload a file to AWS S3 with tags (not add tags to an existing file/object in S3)? I need the file to appear in S3 with my tags, i.e. in a single API call.
I need this because I use a Lambda function (which uses these S3 object tags) that is triggered by S3 ObjectCreated events.
You can provide the Tagging attribute on the put operation.
Here's an example using Boto3:
import boto3

client = boto3.client('s3')
client.put_object(
    Bucket='bucket',
    Key='key',
    Body='bytes',
    Tagging='Key1=Value1'
)
As per the docs, the Tagging attribute must be encoded as URL Query parameters. (For example, "Key1=Value1")
Tagging — (String) The tag-set for the object. The tag-set must be
encoded as URL Query parameters. (For example, "Key1=Value1")
EDIT: I only noticed the boto3 tag after a while, so I edited my answer to match boto3's way of doing it accordingly.
The Tagging directive is now supported by boto3. You can do the following to add tags if you are using upload_file():
import boto3
from urllib import parse

s3 = boto3.client("s3")
tags = {"key1": "value1", "key2": "value2"}

s3.upload_file(
    "file_path",
    "bucket",
    "key",
    ExtraArgs={"Tagging": parse.urlencode(tags)},
)
If you're uploading a file using client.upload_file() or other methods that have the ExtraArgs parameter, you need to add tags in a separate request. You can add metadata as follows, but this is not the same thing. For an explanation of the difference, see this SO question:
import boto3

client = boto3.client('s3')
client.upload_file(
    Filename=path_to_your_file,
    Bucket='bucket',
    Key='key',
    ExtraArgs={"Metadata": {"mykey": "myvalue"}}
)
There's an example of this in the S3 docs; be aware that metadata is not exactly the same thing as tags, though it can function similarly.
s3.upload_file(
    "tmp.txt", "bucket-name", "key-name",
    ExtraArgs={"Metadata": {"mykey": "myvalue"}}
)
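If you do need real object tags with upload_file, a minimal sketch of the separate tagging request mentioned above might look like this (bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')

# Upload first, then attach the tags in a second call
s3.upload_file('tmp.txt', 'bucket-name', 'key-name')
s3.put_object_tagging(
    Bucket='bucket-name',
    Key='key-name',
    Tagging={'TagSet': [{'Key': 'mykey', 'Value': 'myvalue'}]}
)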