Getting "malformed policy" error from CloudFront

I am trying to create signed URLs for CloudFront. I followed the docs from Amazon and was able to configure CloudFront and S3 using the console. The problem comes when I create the signed URL (I generated the policy and signature using Linux commands) and prepare the below URL:
http://1q2w3e4r5t6y7u.cloudfront.net/4/myimage.jpg?Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kbHIyamJoZGdobTE4LmNsb3VkZnJvbnQubmV0LzQvM2IwYWNiMjYtYTUyOC00MTYwLWE1Y2YtNDEzZWI3NGRkNjcxLmpwZyIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTQwODczOTQwMH0sfX1dfQ0K&Signature=jOv/hpQSO7ChSYQ3w9k2EVh7MUrBxQ2dqbjQNPuEFcWgCKcBT6BufQoMnGWmVLHnIvFr8/ErQC2Q6iAxTyxHoHN7K9FMB2QmLbqaenKaRh8RIcufTmOlsbWXxMpQTwFOquQX7JE/2i4m6OGZBi4Chwse9fQwzHdQ4A6FPr/r8l0rDHLBXF58z8mq3tqJIqiE3joxJoy2K5dY4tzIXWCGZ25L941O8dkpSrmDbmQii8iGiJUGE0bFICpndlEbDVDUkHZsMSPXYt8fjJ2YTIbL58QtaVLMJeXY0kuDq4IUZ8ryp7BZ1Cqj5RKnkToIO4Qe5fNbfl9g-6nydcUbr6q72g__&Key-Pair-Id=xxxxxxxxxxxxxxxxxxxx
But I keep getting a "Malformed url" error. Please help!

Well, it does look malformed: the signature has several / characters, and it shouldn't.
The docs indicate that this pipeline can be used to build the signature:
cat policy | openssl sha1 -sign private-key.pem | openssl base64 | tr '+=/' '-_~'
If you do that, there shouldn't be any / left in your signature; they would all have been converted to the ~ character.
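For reference, here is a minimal Python sketch of that final substitution step, assuming you already have the raw RSA-SHA1 signature bytes (the function name is just for illustration):

import base64

def cloudfront_safe_b64(raw_signature: bytes) -> str:
    # Standard base64, then CloudFront's URL-safe substitutions: + -> -, = -> _, / -> ~
    encoded = base64.b64encode(raw_signature).decode("ascii")
    return encoded.translate(str.maketrans("+=/", "-_~"))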

Related

AWS S3api put-object: unknown options (checksum-crc32)

So I want to upload a file and have AWS perform a specified CRC32 check (let's say the CRC is ABCD1234) after the upload, but I keep getting this error:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
Unknown options: --checksum-crc32, ABCD1234
The command I use goes as follows (brackets [] for variables):
aws s3api put-object --bucket [BUCKET_NAME] --checksum-crc32 "ABCD1234" --key [NAME_OF_FILE] --body [DESTINATION_PATH] --profile [PROFILE_NAME]
Uploads without the --checksum-crc32 work just fine.
Version: aws-cli/2.4.4
Any guesses why I get this error?
Thanks in advance!
The documentation says that the CRC needs to be Base64-encoded, not hexadecimal:
--checksum-crc32 (string)
This header can be used as a data integrity check to verify that the data received is the same data that was originally sent. This header specifies the base64-encoded, 32-bit CRC32 checksum of the object. For more information, see Checking object integrity in the Amazon S3 User Guide.
So your ABCD1234 would need to be either q80SNA== or NBLNqw==, depending on whether they expect the 32 bits to be rendered in big-endian or little-endian order, respectively. I didn't see anything in the documentation that says which it is.
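A quick Python sketch reproduces both candidate encodings from the hex value:

import base64

crc = 0xABCD1234
print(base64.b64encode(crc.to_bytes(4, "big")).decode())     # q80SNA==
print(base64.b64encode(crc.to_bytes(4, "little")).decode())  # NBLNqw==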
The CRC32 doesn't match their calculation. Make sure you're encoding it properly.
You don't need to specify the checksum on the CLI; you can have the client calculate it by removing --checksum-crc32 and replacing it with --checksum-algorithm "crc32".
If your goal is data integrity, consider a cryptographically secure algorithm like SHA256, which can also be calculated automatically by the CLI.
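For comparison, a minimal boto3 sketch of the same approach (bucket and file names below are placeholders); the SDK computes the checksum itself when ChecksumAlgorithm is set:

import boto3

s3 = boto3.client("s3")
with open("myfile.bin", "rb") as body:
    # The SDK calculates and sends the CRC32 checksum, mirroring
    # --checksum-algorithm "crc32" on the CLI.
    s3.put_object(Bucket="my-bucket", Key="myfile.bin", Body=body, ChecksumAlgorithm="CRC32")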

when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256

I am trying to copy from one bucket to another bucket in AWS with the command below:
aws s3 cp s3://bucket1/media s3://bucket2/media --profile xyz --recursive
It returns an error saying:
An error occurred (InvalidRequest) when calling the ListObjects operation: Missing required header for this request: x-amz-content-sha256
Completed 1 part(s) with ... file(s) remaining
Check your region. This error is known to happen if your region is not set correctly.
Thanks for your answers. The issue was with permissions on the profile used; the credentials must have access rights to both S3 buckets.
I confirm it is an issue of setting a wrong region. However, the question now is: how do you know the region of your S3 bucket?
The answer is in the link of any asset hosted there.
So, assume one of your assets hosted under bucket-1 has the link:
https://s3.eu-central-2.amazonaws.com/bucket-1/asset.png
This means your region is eu-central-2.
Alright, so run:
aws configure
and change your region accordingly.
I received this error in bash scripts without any SDK.
In my case, the fix was adding the missing x-amz-content-sha256 and x-amz-date headers to my cURL request.
Notably
x-amz-date
required by AWS, must contain the timestamp of the request; the accepted format is quite flexible, I’m using ISO8601 basic format.
Example: 20150915T124500Z
x-amz-content-sha256
required by AWS, must be the SHA256 digest of the payload
The request will carry no payload (i.e. the body will be empty). This means that wherever a “payload hash” is required, we will provide an SHA256 hash of an empty string. And that is a constant value of e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855. This concerns the x-amz-content-sha256 header as well.
Detailed explanation: https://czak.pl/2015/09/15/s3-rest-api-with-curl.html
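You can verify that constant yourself in Python:

import hashlib

# SHA-256 digest of an empty payload, as used for x-amz-content-sha256 above
print(hashlib.sha256(b"").hexdigest())
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855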
Assuming you have set the following correctly:
AWS credentials
region
permissions of the bucket (set to publicly accessible)
IAM policy of the bucket
And assuming you are using boto3 client,
then another thing that could be causing the problem is the signature version in the botocore.config.Config.
import boto3
from botocore import config

AWS_REGION = "us-east-1"
BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="v4",  # this value is the problem; see below
    retries={"max_attempts": 10, "mode": "standard"},
)
s3_client = boto3.client("s3", config=BOTO3_CLIENT_CONFIG)
result = s3_client.list_objects(Bucket="my-bucket-name", Prefix="", Delimiter="/")
Here the signature_version cannot be "v4". It should be "s3v4". Alternatively, the signature_version argument can be excluded altogether, as it defaults to "s3v4".
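In other words, a corrected config would look like this:

BOTO3_CLIENT_CONFIG = config.Config(
    region_name=AWS_REGION,
    signature_version="s3v4",  # or omit this argument; "s3v4" is the default
    retries={"max_attempts": 10, "mode": "standard"},
)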

AWS | Boto3 | RDS | DownloadDBLogFilePortion | cannot download a log file because it contains binary data

When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the DownloadDBLogFilePortion operation: This file contains binary data and should be downloaded instead of viewed.
I handle pagination and throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
(rds is the boto3 RDS client)
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
Like I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the solution proposed by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue that you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and is working on a way to resolve it; however, they do not have an ETA for when this will be released.
So the solution is: use the Java API.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data passed does not comply with the API's requirements. It is probably an invalid name that you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact file name that exists inside the RDS instance.
Please make sure the log file EXISTS inside the instance; use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Check Marker (string) and NumberOfLines (integer) as well, for mismatched types or out-of-range values. Skip them since they are not required, then test them later.
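If it helps, here is a minimal boto3 sketch of paging through one log file with Marker (the instance and log file names are placeholders):

import boto3

rds = boto3.client("rds")
marker = "0"  # "0" means start from the beginning of the file
while True:
    resp = rds.download_db_log_file_portion(
        DBInstanceIdentifier="my-rds-name",
        LogFileName="error/mysql-error.log",
        Marker=marker,
        NumberOfLines=1000,
    )
    print(resp.get("LogFileData", ""), end="")
    if not resp.get("AdditionalDataPending"):
        break
    marker = resp["Marker"]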

How to supply a key on the command line that's not Base 64 encoded

Regarding the AWS S3 tool "sync" and a "customer-provided encryption key", it says here,
--sse-c-key (string) The customer-provided encryption key to use to server-side encrypt the object in S3. If you provide this value, --sse-c must be specified as well. The key provided should not be base64 encoded.
How does one supply a key on the command line that is not base64 encoded?
If the key is not base64 encoded, then surely some of the key's bytes would not be expressible as characters?
At first glance, this seems like a HUGE oversight in the AWS CLI. However, buried deep in the CLI documentation is a blurb on how to provide binary data on the command line.
https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-parameters-file.html
(updated link per @Chris's comment)
This did in fact work for me...
aws s3 cp --sse-c AES256 --sse-c-key fileb://key.bin large_file s3://mybucket/
The fileb:// part is the answer
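For comparison, the same SSE-C upload can be done from boto3, where the raw key bytes are passed directly and the SDK takes care of the base64 and MD5 headers (names below are placeholders):

import boto3

with open("key.bin", "rb") as f:
    key = f.read()  # raw key bytes, e.g. 32 bytes for AES256

s3 = boto3.client("s3")
with open("large_file", "rb") as body:
    s3.put_object(
        Bucket="mybucket",
        Key="large_file",
        Body=body,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=key,
    )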

Cannot delete Amazon S3 key that contains bad character

I just began to use S3 recently. I accidentally made a key that contains a bad character, and now I can't list the contents of that folder, nor delete that bad key. (I've since added checks to make sure I don't do this again).
I was using an old "S3" Python module from 2008 originally. Now I've switched to boto 2.0, and I still cannot delete it. I did quite a bit of research online, and it seems the problem is that I have an invalid XML character, so it appears to be a problem at the lowest level; no API has helped so far.
I finally contacted Amazon, and they said to use "s3-curl.pl" from http://aws.amazon.com/code/128. I downloaded it, and here's my key:
<Key>info/[01</Key>
I think I was doing a quick bash for loop over some files at the time, and I have "lscolors" set up, and so this happened.
I tried
./s3curl.pl --id <myID> --key <myKEY> -- -X DELETE https://mybucket.s3.amazonaws.com/info/[01
(and also tried putting the URL in single/double quotes, and also tried to escape the '[').
Without quotes on the URL, it hangs. With quotes, I get "curl: (3) [globbing] error: bad range specification after pos 50". I edited the s3-curl.pl to do curl --globoff and still get this error.
I would appreciate any help.
This solved the issue; just delete the containing folder:
aws s3 rm "s3://BUCKET_NAME/folder/folder" --recursive
You can use the s3cmd tool from here. You first need to run:
s3cmd fixbucket <bucket name that contains bad file>
You can then delete the file using:
s3cmd del <bucket>/<file>
In my case there were newlines in the key (however that happened...). I was able to fix it with the AWS CLI like this:
aws s3 rm "s3://my_bucket/Icon"$'\r'
I also had versioning enabled, so I also needed to do this for all the versions (version IDs are visible in the UI when enabling the version view):
aws s3api delete-object --bucket my_bucket --key "Icon"$'\r' --version-id <version_id>
I was in this situation recently. To list the items, you can use:
aws s3api list-objects-v2 --bucket my_bucket --encoding-type url
The bad keys will come back URL-encoded, like:
"Key": "%01%C3%B4%C2%B3%C3%8Bu%C2%A5%27%40yr%3E%60%0EQ%14%C3%A5.gif"
Spaces became + and I had to change those to %20, and * wasn't encoded, so I had to replace it with %2A before I was able to delete them.
To actually delete them, I wasn't able to use the AWS CLI because it would URL-encode the already URL-encoded key, resulting in a 404. To get around that, I manually hit the REST API with the DELETE verb.
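Another option that may sidestep the double-encoding problem is boto3, which signs the raw key itself (the bucket name below is a placeholder):

import boto3
from urllib.parse import unquote_plus

s3 = boto3.client("s3")
# EncodingType="url" keeps bad keys from breaking the XML listing
resp = s3.list_objects_v2(Bucket="my_bucket", EncodingType="url")
for obj in resp.get("Contents", []):
    key = unquote_plus(obj["Key"])  # decode the URL-encoded key back to its raw form
    print(repr(key))                # repr() makes characters like \r and \x01 visible
    # Uncomment to delete; no manual URL encoding is needed:
    # s3.delete_object(Bucket="my_bucket", Key=key)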
I recently encountered this case; I had a newline at the end of my bucket path. The following command solved the matter:
aws s3 rm "bucket_name"$'\r' --recursive