Why do I get a signature error with this AWS bash deploy script? - amazon-web-services

I am trying to create a bash script to upload files to my s3 bucket. I am having difficulty generating the correct signature.
I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Thanks for your help! Here is my script:
#!/usr/bin/env bash
#upload to S3 bucket
sourceFilePath="$1"
#file path at S3
folderPathAtS3="packages";
#S3 bucket region
region="eu-central-1"
#S3 bucket name
bucket="my-bucket-name";
#S3 HTTP Resource URL for your file
resource="/${bucket}/${folderPathAtS3}";
#set content type
contentType="gzip";
#get date as RFC 7231 format
dateValue="$(date +'%a, %d %b %Y %H:%M:%S %z')"
acl="x-amz-acl:private"
#String to generate signature
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}";
#S3 key
s3Key="my-key";
#S3 secret
s3Secret="my-secret-code";
#Generate signature, Amazon re-calculates the signature and compares if it matches the one that was contained in your request. That way the secret access key never needs to be transmitted over the network.
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac ${s3Secret} -binary | base64);
#Curl to make PUT request.
curl -L -X PUT -T "${sourceFilePath}" \
-H "Host: ${bucket}.${region}.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "$acl" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://s3.amazonaws.com/${bucket}/${folderPathAtS3}

Your signature calculation seems fine, but your request is wrong, and consequently the two do not match.
-H "Host: ${bucket}.${region}.amazonaws.com" \ is incorrect.
The correct value would be ${bucket}.s3.${region}.amazonaws.com; you're overlooking the s3. in the hostname. But even if corrected, the request is still invalid, because your URL https://s3.amazonaws.com/${bucket}/... also includes the bucket, which means the bucket name is implicitly being added to the beginning of the object key since it appears twice.
Additionally, https://s3.amazonaws.com is us-east-1. To connect to the correct region, your URL needs to be one of these variants:
https://s3.${region}.amazonaws.com/${bucket}/${folderPathAtS3}
https://${bucket}.s3.${region}.amazonaws.com/${folderPathAtS3}
https://${bucket}.s3.amazonaws.com/${folderPathAtS3}
Use one of these formats, and eliminate -H "Host: ..." because it will then be redundant.
The last of the 3 URL formats will only start to work after the bucket is more than a few minutes or hours old; S3 creates these DNS entries automatically, but it takes some time.
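As an illustration (a sketch, not a drop-in fix), keeping the rest of your script unchanged, the PUT could use the virtual-hosted regional endpoint. The canonicalized resource for that endpoint is still /${bucket}/${folderPathAtS3}, so the existing stringToSign and signature stay valid, and the explicit Host header can simply be dropped:
# sketch: same variables as in the question, only the URL changes
curl -L -X PUT -T "${sourceFilePath}" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "$acl" \
-H "Authorization: AWS ${s3Key}:${signature}" \
"https://${bucket}.s3.${region}.amazonaws.com/${folderPathAtS3}"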

Related

AWS API Gateway : Signature Mismatch when using query params with AWS IAM Auth

I have a GET-based API Gateway set up pointing to a Lambda with Lambda proxy integration enabled.
The API has AWS IAM as the auth method.
Locally, I have AWS auth set up with a temporary session token.
The following works without issue:
curl -s GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
But when I add query params to the URL, it fails:
curl -s GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev?a=${v1}&b=${v2}" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
This is the response that I get:
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'GET\n/dev\nb=def&a=abc\nhost:<ID>.execute-api.us-west-2.amazonaws.com\nx-amz-date:20230104T112344Z\n\nhost;x-amz-date\<date-token>'\n\nThe String-to-Sign should have been\n'AWS4-HMAC-SHA256\n20230104T112344Z\n20230104/us-west-2/execute-api/aws4_request\<token>'\n"
}
It looks like I need to include the query params in the signature. How do I do that? Or is there something else that I'm missing?
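One thing worth checking, purely as an assumption rather than a confirmed diagnosis: SigV4 requires the canonical query string to be URL-encoded and sorted by parameter name, and curl's --aws-sigv4 handling of query strings has improved across releases, so a recent curl helps. A minimal sketch that percent-encodes the values before building the URL (jq is used here only as a convenient URL-encoder; v1, v2, SESSION_TOKEN, ACCESS_KEY, SECRET_KEY and <ID> are the question's own placeholders):
# percent-encode the values so the signed query string matches what the service computes
a_enc=$(printf '%s' "$v1" | jq -sRr @uri)
b_enc=$(printf '%s' "$v2" | jq -sRr @uri)
curl -s "https://<ID>.execute-api.us-west-2.amazonaws.com/dev?a=${a_enc}&b=${b_enc}" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user "$ACCESS_KEY:$SECRET_KEY" \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .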

Trying to encrypt a file upload to Cloud Storage through curl

Please help me. I am using curl to upload an encrypted file to Cloud Storage, but the file is not uploaded. When I upload it without the encryption key and hash key headers, the file is uploaded successfully. Please solve this problem.
curl -X POST --data-binary @OBJECT \
-H "Authorization: Bearer ya12.a0ARrdaM8ZPiR_ukSDNO_VPYAJa2W2O67Ds91CKwwLGnWU1DTZF02K237YsXFCqePCi3xSgD0s_cvhIIc_474-Y3h0bDZof69K0snlOAYlwwQw1fBM2QrBUQRKsQOZj1qHILgcZOxptqBxp0e8mx" \
-H "Content-Type: image/jpg" \
-H "x-goog-encryption-algorithm: AES256" \
-H "x-goog-encryption-key: NSOgD4929vFoA8zawwmkuaizAdtdydWGQRuOeZID+GY=" \
-H "x-goog-encryption-key-sha256:ca2f7dac23426d0ba16911be8911f9b71b1fa7f9ecc53ac87100932677d92319" \
"https://storage.googleapis.com/upload/storage/v1/b/bucketName/o?uploadType=media&name=1.jpg"
Note: I am generating the base64-encoded encryption key in PHP and hashing the key:
$key = random_bytes(32);
$encodedKey = base64_encode($key);
$hash = hash('SHA256',$key);
I am getting this error:
{
  "error": {
    "code": 400,
    "message": "Missing an encryption key, or it is not base64 encoded, or it does not meet the required length of the encryption algorithm."
  }
}
Both x-goog-encryption-key and x-goog-encryption-key-sha256 should be base64 encoded.
Based on your PHP code, you are probably using an encryption key that is base64 encoded (that is OK), but the hash is not base64 encoded.
Try encoding the raw hash at the end of the PHP code, for example:
$encodedHash = base64_encode(hash('SHA256', $key, true));
(The third argument true makes hash() return the raw binary digest instead of a hex string, so the base64 value decodes to the 32-byte SHA-256 of your key.)
More detail can be found here: https://cloud.google.com/storage/docs/encryption/customer-supplied-keys
Quote:
Include the following HTTP headers in your JSON or XML request:
x-goog-encryption-algorithm (string): The encryption algorithm to use. You must use the value AES256.
x-goog-encryption-key (string): An RFC 4648 Base64-encoded string of your AES-256 encryption key.
x-goog-encryption-key-sha256 (string): An RFC 4648 Base64-encoded string of the SHA256 hash of your encryption key.
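For completeness, here is a minimal shell sketch of the same idea, assuming openssl is available; ACCESS_TOKEN stands for the OAuth bearer token from the question, and the bucket and object names are the question's placeholders. It generates a random 32-byte key, base64-encodes it, and base64-encodes the raw SHA-256 digest of the key, which is exactly what the two headers expect:
key_b64=$(openssl rand -base64 32)
# hash the raw key bytes (not the base64 text) and base64-encode the binary digest
hash_b64=$(printf '%s\n' "$key_b64" | openssl base64 -d | openssl dgst -sha256 -binary | openssl base64)
curl -X POST --data-binary @1.jpg \
-H "Authorization: Bearer ${ACCESS_TOKEN}" \
-H "Content-Type: image/jpeg" \
-H "x-goog-encryption-algorithm: AES256" \
-H "x-goog-encryption-key: ${key_b64}" \
-H "x-goog-encryption-key-sha256: ${hash_b64}" \
"https://storage.googleapis.com/upload/storage/v1/b/bucketName/o?uploadType=media&name=1.jpg"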

How do I set up my API to require an API key with amazon API Gateway?

I have been following advice on this post. I've created an API key on AWS and set my POST method to require an API key.
I have also setup a usage plan and linked that API key to it.
My API key is enabled
When I test requests with Postman, my request still goes through without any additional headers.
I was expecting no requests to go through unless I had included a header in my request like this: "x-api-key": "my_api_key".
Do I need to change the endpoint I send requests to in Postman for them to go through API Gateway?
If you need to require an API key for each method, then "API Key Required" needs to be set to true for each method.
Go to Resources, select your resource and method, go to Method Request, and set "API Key Required" to true.
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/welcome.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-api-key-source.html
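Once "API Key Required" is true, the change only takes effect after the API is deployed to the stage again, and the key must be attached to a usage plan that covers that stage. A quick way to verify (a sketch; the URL below is a placeholder for your invoke URL) is to call the method with and without the x-api-key header; without it, API Gateway should return 403 with {"message":"Forbidden"}:
curl -s -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>"
# expected: {"message":"Forbidden"}
curl -s -X POST "https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/<resource>" \
-H "x-api-key: my_api_key"
# expected: the normal response from your backend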
If you want, I've made the following script to enable the API key on every method of a given API. It requires the jq tool for advanced JSON parsing.
You can find the script to enable the API key for all methods of an API Gateway API on this gist.
#!/bin/bash

api_gateway_method_enable_api_key() {
    local api_id=$1
    local method_id=$2
    local method=$3

    aws --profile "$profile" --region "$region" \
        apigateway update-method \
        --rest-api-id "$api_id" \
        --resource-id "$method_id" \
        --http-method "$method" \
        --patch-operations op="replace",path="/apiKeyRequired",value="true"
}

# change this to 1 in order to execute the update
do_update=0

profile=your_profile
region=us-east-1
id=your_api_id
tmp_file="/tmp/list_of_endpoint_and_methods.json"

aws --profile $profile --region $region \
    apigateway get-resources \
    --rest-api-id $id \
    --query 'items[?resourceMethods].{p:path,id:id,m:resourceMethods}' >"$tmp_file"

while read -r line; do
    path=$(jq -r '.p' <<<"$line")
    method_id=$(jq -r '.id' <<<"$line")
    echo "$path"
    # do not update the OPTIONS method
    for method in GET POST PUT DELETE; do
        has_method=$(jq -r ".m.$method" <<<"$line")
        if [ "$has_method" != "null" ]; then
            if [ $do_update -eq 1 ]; then
                api_gateway_method_enable_api_key "$id" "$method_id" "$method"
                echo " $method method changed"
            else
                echo " $method method will be changed"
            fi
        fi
    done
done <<<"$(jq -c '.[]' "$tmp_file")"
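As a usage note: with do_update=0 the script only prints which methods would change; set do_update=1 near the top to actually apply the update.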

Uploading to Amazon S3 using signed S3 url in Grails

Curl successfully uploads the file to S3 using a signed url:
curl -v -k -X PUT \
-H "x-amz-server-side-encryption: AES256" \
-H "Content-Type: application/pdf" \
-T "__tests__/resources/test.pdf" \
"http://mybucket.s3.amazonaws.com/test.pdf?AWSAccessKeyId=IDKEY&Expires=1489458783&Signature=SIGNATURE
I've tried replicating this in Grails using the REST client plugin:
String url = "http://mybucket.s3.amazonaws.com/test.pdf?AWSAccessKeyId=IDKEY&Expires=1489458783&Signature=SIGNATURE"
RestResponse resp = rest.put(url){
header "x-amz-server-side-encryption", "AES256"
header "Content-Type", "application/pdf"
body pdf
}
But Amazon rejects the upload, saying the arguments are incorrect... probably due to the pdf being sent as a "body" parameter. Any ideas?
Instead of using a REST client to upload, it would be simpler to use the AWS Java SDK in your Grails app.
See an example of using a pre-signed URL to upload here: http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html

Uploading to Amazon S3 using cURL/libcurl

I am currently trying to develop an application to upload files to an Amazon S3 bucket using cURL and C++. After carefully reading the S3 developer guide, I started implementing my application using cURL, forming the header as described by the guide. After lots of trial and error to determine the best way to create the S3 signature, I am now facing a 501 error. The received header suggests that the method I'm using is not implemented. I am not sure where I'm wrong, but here is the HTTP header that I'm sending to Amazon:
PUT /test1.txt HTTP/1.1
Accept: */*
Transfer-Encoding: chunked
Content-Type: text/plain
Content-Length: 29
Host: [BucketName].s3.amazonaws.com
Date: [Date]
Authorization: AWS [Access Key ID]:[Signature]
Expect: 100-continue
I have truncated the Bucket Name, Access Key ID and Signature for security reasons.
I am not sure what I'm doing wrong, but I think the error is generated because of the Accept and Transfer-Encoding fields (not really sure). Can anyone tell me what I'm doing wrong, or why I'm getting a 501?
The game has changed significantly since this question was asked: the simple authorization headers no longer apply, yet it is still feasible to do this with a UNIX shell script, as follows.
Ensure openssl and curl are available at the command line. TIP: double-check the openssl argument syntax, as it may vary between versions of the tool; e.g. openssl sha -sha256 ... versus openssl sha256 ...
Beware: a single extra newline or space character, or the use of CRLF in place of a lone newline character, will defeat the signature. Note too that you may want to use content types, possibly with encodings, to prevent any data transformation through the communication media. You may then have to adjust the list of signed headers in several places; please refer to the Amazon S3 API docs for the numerous conventions that must be enforced, like the alphabetical, lowercase ordering of the header info used in the hash calculations at several (redundant) places.
# BERHAUZ Nov 2019 - curl script for file upload to Amazon S3 Buckets
test -n "$1" || {
echo "usage: $0 <myFileToSend.txt>"
echo "... missing argument file ..."
exit
}
yyyymmdd=`date +%Y%m%d`
isoDate=`date --utc +%Y%m%dT%H%M%SZ`
# EDIT the next 4 variables to match your account
s3Bucket="myBucket.name.here"
bucketLocation="eu-central-1"
s3AccessKey="THISISMYACCESSKEY123"
s3SecretKey="ThisIsMySecretKeyABCD1234efgh5678"
#endpoint="${s3Bucket}.s3-${bucketLocation}.amazonaws.com"
endpoint="s3-${bucketLocation}.amazonaws.com"
fileName="$1"
contentLength=`cat ${fileName} | wc -c`
contentHash=`openssl sha256 -hex ${fileName} | sed 's/.* //'`
canonicalRequest="PUT\n/${s3Bucket}/${fileName}\n\ncontent-length:${contentLength}\nhost:${endpoint}\nx-amz-content-sha256:${contentHash}\nx-amz-date:${isoDate}\n\ncontent-length;host;x-amz-content-sha256;x-amz-date\n${contentHash}"
canonicalRequestHash=`echo -en ${canonicalRequest} | openssl sha256 -hex | sed 's/.* //'`
stringToSign="AWS4-HMAC-SHA256\n${isoDate}\n${yyyymmdd}/${bucketLocation}/s3/aws4_request\n${canonicalRequestHash}"
echo "----------------- canonicalRequest --------------------"
echo -e ${canonicalRequest}
echo "----------------- stringToSign --------------------"
echo -e ${stringToSign}
echo "-------------------------------------------------------"
# calculate the signing key
DateKey=`echo -n "${yyyymmdd}" | openssl sha256 -hex -hmac "AWS4${s3SecretKey}" | sed 's/.* //'`
DateRegionKey=`echo -n "${bucketLocation}" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateKey} | sed 's/.* //'`
DateRegionServiceKey=`echo -n "s3" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionKey} | sed 's/.* //'`
SigningKey=`echo -n "aws4_request" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionServiceKey} | sed 's/.* //'`
# then, once more a HMAC for the signature
signature=`echo -en ${stringToSign} | openssl sha256 -hex -mac HMAC -macopt hexkey:${SigningKey} | sed 's/.* //'`
authoriz="Authorization: AWS4-HMAC-SHA256 Credential=${s3AccessKey}/${yyyymmdd}/${bucketLocation}/s3/aws4_request, SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date, Signature=${signature}"
curl -v -X PUT -T "${fileName}" \
-H "Host: ${endpoint}" \
-H "Content-Length: ${contentLength}" \
-H "x-amz-date: ${isoDate}" \
-H "x-amz-content-sha256: ${contentHash}" \
-H "${authoriz}" \
http://${endpoint}/${s3Bucket}/${fileName}
I must acknowledge that, for someone a bit involved in cryptography like me, the Amazon signature scheme deserves numerous criticisms:
there is much redundancy in the information being signed,
the 5-step HMAC cascade almost inverts the semantics between key seed and data, where 1 step would suffice with proper usage and the same security,
the last 12 characters of the secret key are useless here, because the significant key length of a SHA256 HMAC is... 256 bits, hence 32 bytes, of which the first 4 here always start with "AWS4" for no real purpose,
overall, the AWS S3 API re-invents standards where an S/MIME payload would have done.
Apologies for the criticism; I was not able to resist. Yet I acknowledge: it works reliably, it is useful for many companies, and it is an interesting service with a rich API.
You could execute a bash file. Here is an example upload.sh script which you could just run as: sh upload.sh yourfile
#!/bin/bash
file=$1
bucket=YOUR_BUCKET
resource="/${bucket}/${file}"
contentType="application/x-itunes-ipa"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=YOUR_KEY_HERE
s3Secret=YOUR_SECRET
echo "SENDING TO S3"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -vv -X PUT -T "${file}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
more on: http://www.jamesransom.net/?p=58
Solved: I was missing a CURLOPT for the file size in my code, and now everything is working perfectly.