We have an Amazon SES setup that works well and sends thousands of emails a day via SMTP. Trying to follow the best practice of "rotating" access keys, we went to
https://console.aws.amazon.com/iam/home and created a new access key for the exact same user that is used to send emails. The new key is supposedly active, but when trying to send email with the new access keys we keep getting
535 Authentication Credentials Invalid
Switching back to the old access keys works well and emails are sent. We tried a couple of times to delete the new access keys and create others. Same machine, same software. We have proper copy+paste skills, so we are certain we're using the exact ID/password pair provided in the CSV coming from Amazon.
So what's going on? Is there a time limit till the new key becomes active? Is there some other hidden limitation somewhere?
You are confusing the SMTP credentials with access_key and secret. They are different.
access_key/secret --> Use in SDK and CLI
SMTP credentials --> Use to configure SES SMTP
You are creating a new access_key/secret and using it as SMTP credentials
Instead, create new SMTP credentials and use those
Key rotation is different from SMTP credential rotation
No need to create a new user
It is likely you are using SMTP credentials, which do not change even if you generate another set of access_key/secret. In your case it looks like you are using the SMTP server and not the SDK, so generating a new access_key/secret pair has no effect on your SMTP credentials.
If you want to create a new set of SMTP credentials, go to AWS SES dashboard and create SMTP credentials.
For more information: Obtaining Your Amazon SES SMTP Credentials
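If you want to sanity-check which kind of credentials you are actually holding, you can test them by hand against the SES SMTP endpoint. This is only a sketch: it assumes openssl is available and uses the us-east-1 endpoint, so adjust the host to your region.
openssl s_client -connect email-smtp.us-east-1.amazonaws.com:587 -starttls smtp -crlf -quiet
# then type, one line at a time:
#   AUTH LOGIN
#   <base64 of your SMTP username, e.g. from: echo -n 'USERNAME' | base64>
#   <base64 of your SMTP password>
# A "235 Authentication successful" reply means the pair really is a set of
# SMTP credentials; "535 Authentication Credentials Invalid" is exactly what
# you get when a raw IAM secret access key is pasted instead of the derived
# SMTP password.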
Yes, there's a hidden limitation in the way AWS approaches the SMTP password for SES. And they are using a very confusing way of handling these credentials.
The answer from helloV is on the right track, but it's not entirely correct. Both AWS and his answer tell us that Access_key/Secret_key and SES SMTP credentials are different things, but:
If you create fresh SES SMTP credentials, it creates a new IAM User with an Access Key/Secret Key pair
The Access Key Id is the same as the username for SMTP
If you delete or disable this key, you lose your SMTP access. So they are clearly very related.
The password for SMTP is derived from the Secret Key
It turns out that a new access_key/secret_key pair on an existing IAM user can be used for SMTP, and therefore keys can be rotated without creating new users.
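As a rough sketch of that rotation flow with the AWS CLI (the user name and key id below are made up; deriving the SMTP password from the new secret is shown further down):
# Create a second access key on the existing SMTP IAM user (name assumed)
aws iam create-access-key --user-name ses-smtp-user.20180101
# Derive the SMTP password from the new SecretAccessKey, reconfigure the
# mailer, and once mail flows again retire the old key:
aws iam update-access-key --user-name ses-smtp-user.20180101 \
    --access-key-id AKIAOLDKEYIDEXAMPLE --status Inactive
aws iam delete-access-key --user-name ses-smtp-user.20180101 \
    --access-key-id AKIAOLDKEYIDEXAMPLE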
AWS converts the Secret Access Key to generate the SMTP password, as they explain in this documentation page:
The following pseudocode shows the algorithm that converts an AWS Secret Access Key to an Amazon SES SMTP password.
key = AWS Secret Access Key;
message = "SendRawEmail";
versionInBytes = 0x02;
signatureInBytes = HmacSha256(message, key);
signatureAndVer = Concatenate(versionInBytes, signatureInBytes);
smtpPassword = Base64(signatureAndVer);
So using the Secret Access key, the SMTP password can be generated
With bash and openssl installed, the following command will output the password for use in SMTP:
(echo -en "\x02"; echo -n 'SendRawEmail' \
| openssl dgst -sha256 -hmac "$AWS_SECRET_ACCESS_KEY" -binary) \
| openssl enc -base64
Just replace $AWS_SECRET_ACCESS_KEY with your key, or set the variable beforehand.
Since the Secret Key and the SMTP password are in different formats, you need to convert the Secret Key into an SMTP password using the algorithm provided by AWS.
You can find it here:
https://aws.amazon.com/premiumsupport/knowledge-center/ses-rotate-smtp-access-keys/
Here is a working piece of code for transforming your secret key into an SMTP password using bash:
#!/usr/bin/env bash
# Convert AWS Secret Access Key to an Amazon SES SMTP password
# using the following pseudocode:
#
# date = "11111111";
# service = "ses";
# terminal = "aws4_request";
# message = "SendRawEmail";
# version = 0x04;
#
# kDate = HmacSha256(date, "AWS4" + key);
# kRegion = HmacSha256(region, kDate);
# kService = HmacSha256(service, kRegion);
# kTerminal = HmacSha256(terminal, kService);
# kMessage = HmacSha256(message, kTerminal);
# signatureAndVersion = Concatenate(version, kMessage);
# smtpPassword = Base64(signatureAndVersion);
#
# Usage:
# chmod u+x aws-ses-smtp-password.sh
# ./aws-ses-smtp-password.sh secret-key-here
# See: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/smtp-credentials.html
#
if [ "$#" -ne 1 ]; then
echo "Usage: ./aws-ses-smtp-password.sh secret-key-here"
exit 1
fi
KEY="${1}"
DATE="11111111"
REGION="eu-west-1"  # set this to the region of your SES SMTP endpoint
SERVICE="ses"
TERMINAL="aws4_request"
MESSAGE="SendRawEmail"
VERSION="4"
VERSION_IN_BYTES=$(printf \\$(printf '%03o' "${VERSION}"));
#SIGNATURE_IN_BYTES=$(echo -n "${MESSAGE}" | openssl dgst -sha256 -hmac "${KEY}" -binary);
SIGNATURE_IN_BYTES=$(echo -n "${DATE}" | openssl dgst -sha256 -mac HMAC -macopt "key:AWS4${KEY}" | sed 's/^.* //');
SIGNATURE_IN_BYTES=$(echo -n "${REGION}" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:${SIGNATURE_IN_BYTES}" | sed 's/^.* //');
SIGNATURE_IN_BYTES=$(echo -n "${SERVICE}" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:${SIGNATURE_IN_BYTES}" | sed 's/^.* //');
SIGNATURE_IN_BYTES=$(echo -n "${TERMINAL}" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:${SIGNATURE_IN_BYTES}" | sed 's/^.* //');
# The final HMAC is raw bytes; pipe it straight into base64 together with the
# version byte, so the binary signature is never stored in a shell variable
# (command substitution would drop NUL bytes and the sed would eat spaces).
SMTP_PASSWORD=$( { echo -n "${VERSION_IN_BYTES}"; echo -n "${MESSAGE}" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:${SIGNATURE_IN_BYTES}" -binary; } | base64 );
echo "${SMTP_PASSWORD}"
Related
I am trying to create a bash script to upload files to my s3 bucket. I am having difficulty generating the correct signature.
I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Here is my script:
Thanks for your help!
#!/usr/bin/env bash
#upload to S3 bucket
sourceFilePath="$1"
#file path at S3
folderPathAtS3="packages";
#S3 bucket region
region="eu-central-1"
#S3 bucket name
bucket="my-bucket-name";
#S3 HTTP Resource URL for your file
resource="/${bucket}/${folderPathAtS3}";
#set content type
contentType="gzip";
#get date as RFC 7231 format
dateValue="$(date +'%a, %d %b %Y %H:%M:%S %z')"
acl="x-amz-acl:private"
#String to generate signature
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}";
#S3 key
s3Key="my-key";
#S3 secret
s3Secret="my-secret-code";
#Generate signature, Amazon re-calculates the signature and compares if it matches the one that was contained in your request. That way the secret access key never needs to be transmitted over the network.
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac ${s3Secret} -binary | base64);
#Curl to make PUT request.
curl -L -X PUT -T "${sourceFilePath}" \
-H "Host: ${bucket}.${region}.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "$acl" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://s3.amazonaws.com/${bucket}/${folderPathAtS3}
Your signature seems fine, but your request is wrong and consequently does not match.
-H "Host: ${bucket}.${region}.amazonaws.com" \ is incorrect.
The correct value is ${bucket}.s3-${region}.amazonaws.com. You're overlooking the s3- in the hostname... but even if that were correct, the request would still be invalid, because your URL https://s3.amazonaws.com/${bucket}/... also includes the bucket, which means your bucket name is being implicitly added to the beginning of the object key because it appears twice.
Additionally, https://s3.amazonaws.com is us-east-1. To connect to the correct region, your URL needs to be one of these variants:
https://s3-${region}.amazonaws.com/${bucket}/${folderPathAtS3}
https://${bucket}.s3-${region}.amazonaws.com/${folderPathAtS3}
https://${bucket}.s3.amazonaws.com/${folderPathAtS3}
Use one of these formats, and eliminate -H "Host: ..." because it will then be redundant.
The last of the 3 URL formats will only start to work once the bucket is more than a few minutes or hours old; S3 creates the needed DNS entry automatically, but it takes some time.
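Putting it together, the corrected request could look roughly like this (a sketch only, reusing the variables from the script in the question and the first URL variant above; note that newer regions such as eu-central-1 expect Signature Version 4, so this older AWS-signature style may still be rejected there):
# No explicit Host header, region-specific endpoint, bucket appears only once
resource="/${bucket}/${folderPathAtS3}"
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}"
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac "${s3Secret}" -binary | base64)
curl -L -X PUT -T "${sourceFilePath}" \
  -H "Date: ${dateValue}" \
  -H "Content-Type: ${contentType}" \
  -H "${acl}" \
  -H "Authorization: AWS ${s3Key}:${signature}" \
  "https://s3-${region}.amazonaws.com${resource}"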
I've been trying to set up CloudFront with signed cookies using elixir. From what I learned the signature must use sha1 encoding. According to AWS in the terminal that would look like this:
cat policy.json | openssl sha1 -sign CloudFront_Key.pem |base64 |tr '+=/' '-_~'
I can't figure out how to do that in Elixir. I've been looking into this dependency and tried using :sha1 in its sign method. I did read through here and saw sha, not sha1. Is that how Erlang calls it? (I have close to no knowledge of Erlang.) I tried :sha, but I do not get the same key as in the terminal and it does not seem to work. I also noticed that base64 usually adds a few extra chars at the end compared to the terminal. Not sure what to do. Should I resort to using System.cmd?
This is the AWS doc I'm following: create & verify signed cookies and create signature for signed cookie
You can use :public_key.sign/3 for this.
For reference,
$ cat pk.pem
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAknEbD26dg4w2JlaN2pmtz7sEBSEjFcfsgcUuV100L60Lz36s
EDySjiTMbhfhKRa46gBvGSEzb8G1l4iD1O5M5mfzEVg/shSOgwVfyvq5XShJBaz6
PcKJ4hsLh/aUIkvnfYWg5zdcPj5FBYT0I+jLnWvNevsolfGwN4rY5S/PuBCUwtVC
m1L3p5bVHvR33tjI4qenHkuRJUnJh4TMLCOTCGHdmZ+yyWJ2rLcYq7yciI0WlRh5
cCI1eYEehUsbcifOyazKNOAmO/9DzXZlo9S0TSYjxrG626mt+j9yD473t/FSfUeg
6Hn67wfGBWAk0HezJ3QGRl9jBAeYhSLbaX7DHQIDAQABAoIBAHo+Blu8d6oe+fjI
2cM389pq/7EUd0gwSmINamCtQenmZux/jjxDhAc5+piQQHlfKV7Um+j7SQeqSN7E
q1+syO6wqTu6UflipZADhXJYFzIHdeVR/tZdNWJUNyz5DbEPcZ7bVHSORucCbfVs
hawQISA4pB9b1wZL6VCEDAhM//ViRgBA3k6tBu5jNi+I1WXE9nrat4DSDV3uklfo
f5KfHLhbHgrl7R7jnAyWBya6O6Pw0r/3FeWrjo6W0wht5BprX4HSgJvpWtqjhodA
iucWjmWV7Gt++HBRX0NEQozajQ7fdfnCOpSD2g8QzVw/5whRBidYngJWLBNt1YQV
UMi7toECgYEAzw4i+m0PnqiExMOSLsvPcEQu0wScOe3vwXTBOcfOS0Uds0kWzZP2
BHQL7IATtN+OAMdtFi1xyN+auikTsZpy1hezUtamfeo75btFhgSrtRXQki5o3pX8
HgDh+31+Ox1RM/Qhm4DSPQ/YImw+HQD+lK9sw3yNZlmPmXwyCARxkCECgYEAtQ75
M5HcBG+zPbPHc4mrOgtQzaUTEs1onu/zDCuFZwYtyBB0RKSLLmacgh/Qcdu13Lli
fdTqego5asKztQfsB88aqkYtdY51dnm+H2sxQ2ef+1D2gbV+sdrsM7gUuVvxBMwv
itFizQ1rgZn/Vee4zfFXEoURjkKfTKT27Hn7A30CgYEAx4ugOiiRPR67lcXFREQ3
jsKnPcbbqRieT5rt/XmKXxAlJ3vw9f76wh/0veBRHae1exq3DwCNAEI/I9oimK94
rMv6joM/wWnUf/qTbi1iLgrwD3Gar6lsaJ4BLBYtaVs/vwowuWTVOPPkIIig8+LZ
dwH5mAyZWWJG+myu6vsdVwECgYAJqT/g4ZKU5gTxcOteneT2Fu572qgW48EGYhVc
++GFas38k+wwUXtfwXfudZYgzTF6EqZPwpG0a2E+8h62tTKCBCoPFemNEUnxRXPA
p26cgyYFOf+9UhrtkJnz9Imejmpg8ChFRwD3ohSveLEoO1IgIxWbVmBmb+WiKFdI
rQWY3QKBgA6Gw/8KB7LxqR3N9/1xgW5b02WVhG7gvFNlSYZMmdd8H3w8MdXCdk22
kJBfrIfkDS6nbF6w4+8q309NPwOqdkt2QNPq6Il1ugWuxFMiNEGHjjT21PrsgEJS
q6K6c+znrnI7O/wijdvEC2n+q+S9h8yv1bT6QfC1Vx88SC9GCBuq
-----END RSA PRIVATE KEY-----
$ cat policy.json
{"Statement":[{"Resource":"https://mycloudfront.net/a.png","Condition":{"DateLessThan":{"AWS:EpochTime":1512086400}}}]}
First, read the key:
iex(1)> key = File.read!("pk.pem") |> :public_key.pem_decode |> hd |> :public_key.pem_entry_decode
...
Then sign the data using :public_key.sign/3:
iex(2)> File.read!("policy.json") |> :public_key.sign(:sha, key) |> Base.encode64
"QjLmx3LASRb1zt9eW/EMywGMXB1SwX/0JrTnLOFulYjcRJ1dpacUZBB/AYI1zwaXPEQTgQ8crNDFgje6fqbLKoNwgcpE9mOK/RdDKi963ztJnD6EmtM60YbROSpjQ/LDupEYgipPNZbjCnRCJcqDX43BadbVR75G3B5mFmAwtRSPdslJ5irVnt9PjoDMdi9DYe1wGhgQkoym1tiKEyaTrH5lyrw+KPdAi1tpzuZ60ZEcQFJJbKqYYdA0SslbUFL71mdLLkQ9xz95JPNpsSY3ZJyJsKpRGFJuaL1aMsdNLxlLD91PpNW15FitBpBnAwuiiEfPrwU14zIxsfFszaM6KA=="
The output is identical to openssl:
$ cat policy.json | openssl sha1 -sign pk.pem | base64
QjLmx3LASRb1zt9eW/EMywGMXB1SwX/0JrTnLOFulYjcRJ1dpacUZBB/AYI1zwaXPEQTgQ8crNDFgje6fqbLKoNwgcpE9mOK/RdDKi963ztJnD6EmtM60YbROSpjQ/LDupEYgipPNZbjCnRCJcqDX43BadbVR75G3B5mFmAwtRSPdslJ5irVnt9PjoDMdi9DYe1wGhgQkoym1tiKEyaTrH5lyrw+KPdAi1tpzuZ60ZEcQFJJbKqYYdA0SslbUFL71mdLLkQ9xz95JPNpsSY3ZJyJsKpRGFJuaL1aMsdNLxlLD91PpNW15FitBpBnAwuiiEfPrwU14zIxsfFszaM6KA==
I have been asked to do some system admin and to move a legacy PHP web application to an Amazon EC2 instance running Debian. I have done this, and emails are successfully being sent from postfix.
Concern was expressed by the previous system admin that the server was not using an email relay, and a request to use SES seemed straightforward. I have implemented a mail relay using Mailgun from a Rackspace instance, and though not trivial, I got this done in a couple of hours.
I have not found the SES process quite so simple, and I suspect this is because I am unfamiliar with using certificates.
Initially I set up the service using the instructions here
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/postfix.html
Elastic IP set up for server
Credentials created for SMTP server
Created IAM user and got a username and password for SMTP at
email-smtp.us-west-2.amazonaws.com
I created an /etc/postfix/sasl_passwd file with
[email-smtp.us-west-2.amazonaws.com]:25 USERNAME:PASSWORD
I then ran
postmap hash:/etc/postfix/sasl_passwd
to create the sasl_passwd.db
/etc/postfix/master.cf did not have smtp_fallback_relay in it
I created a certificate by installing sasl2-bin (apt-get install sasl2-bin) and running
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt
and pointed postfix to it in my main.cf (at the end of this post).
I am using sendmail to send an email via Python
SENDMAIL = "/usr/sbin/sendmail" # sendmail location
FROM = "andy@travelinsurancequotes.com.au"
#TO = ["kirstie@travelinsurancequotes.com.au", "jason@slatescience.com"]
TO = ["jason@slatescience.com"]
SUBJECT = "Artog SMTP server is working!"
TEXT = "Sending emails on the TIQ webserver is working"
# Prepare actual message
message = """\
From: %s
To: %s
Subject: %s
%s
""" % (FROM, ", ".join(TO), SUBJECT, TEXT)
# Send the mail
import os
p = os.popen("%s -f %s -t -i" % (SENDMAIL, FROM), "w")
p.write(message)
status = p.close()
if status:
    print "Sendmail exit status", status
but I keep getting a time out error on sending:
Feb 26 03:18:19 lamp postfix/error[23414]: 5DE3240508: to=<jason#slatescience.com>, relay=none, delay=0.02, delays=0.02/0/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to email-smtp.us-west-2.amazonaws.com[54.187.123.10]:25: Connection timed out
I can connect via port 25
root#lamp /home/www# telnet email-smtp.us-west-2.amazonaws.com 25
Trying 54.149.142.243...
Connected to ses-smtp-us-west-2-prod-14896026.us-west-2.elb.amazonaws.com.
Escape character is '^]'.
220 email-smtp.amazonaws.com ESMTP
My main.cf file is
myhostname = travelinsurancequotes.com.au
mydomain = travelinsurancequotes.com.au
inet_interfaces = all
mynetworks_style = host
local_destination_recipient_limit = 300
local_destination_concurrency_limit = 5
recipient_delimiter=+
smtpd_banner = $myhostname
smtpd_sasl_auth_enable = yes
smtp_sasl_mechanism_filter = plain
smtpd_sasl_local_domain = $myhostname
broken_sasl_auth_clients = yes
smtpd_helo_required = yes
smtp_use_tls = yes
smtpd_use_tls = yes
smtp_tls_note_starttls_offer = yes
smtpd_tls_key_file = /etc/postfix/sslcerts/server.key
smtpd_tls_cert_file = /etc/postfix/sslcerts/server.crt
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
relayhost = [email-smtp.us-west-2.amazonaws.com]:25
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
AWS EC2 throttles outbound mail sent over port 25 by default.
I had that error, and Amazon Support told me to fill out this form to remove the limit:
https://aws.amazon.com/forms/ec2-email-limit-rdns-request
I hope this helps.
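Alternatively (a sketch, not part of the form-based fix above): the SES SMTP endpoints also listen on ports 587 and 2587, which EC2 does not throttle, so pointing the Postfix relay at port 587 sidesteps the port 25 limit. The endpoint and file paths below match the question's setup.
# Relay to SES over port 587 (STARTTLS) instead of 25
sudo postconf -e 'relayhost = [email-smtp.us-west-2.amazonaws.com]:587'
# The sasl_passwd entry must match the new relayhost string exactly:
#   [email-smtp.us-west-2.amazonaws.com]:587 USERNAME:PASSWORD
sudo postmap hash:/etc/postfix/sasl_passwd
sudo service postfix restart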
Amazon has instructions for postfix and sendmail, but not OpenSMTPD, so adding them here.
Tested with OpenBSD 5.8
Verify your domain and a sender in AWS SES console. Save your SMTP Settings.
Set up the SMTP authentication details in the mail secrets database (replacing $smtpUsername:$smtpPassword with the values from step 1)
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "ses $smtpUsername:$smtpPassword" >> /etc/mail/secrets
# makemap /etc/mail/secrets
Configure OpenSMTPD:
# nano /etc/mail/smtpd.conf
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
accept for local alias <aliases> deliver to mbox
accept from local for any relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com auth <secrets>
Restart OpenSMTPD:
# rcctl restart smtpd
Test it:
# sendmail -v -f verified-sender@verified-domain.com to@example.com
Subject: test subject
test body
^D
Errors?
watch your line-breaks in smtpd.conf
# smtpd -n to check for syntax errors in smtpd.conf
Try port 587 if your machine is blocking port 25 (append :587 to the AWS relay URL in smtpd.conf, e.g. relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com:587 auth <secrets>)
I am currently trying to develop an application to upload files to an Amazon S3 bucket using cURL and C++. After carefully reading the S3 developer guide I started implementing my application, forming the header as described there. After a lot of trial and error to determine the best way to create the S3 signature, I am now facing a 501 error. The response suggests that the method I'm using is not implemented. I am not sure where I'm wrong, but here is the HTTP header that I'm sending to Amazon:
PUT /test1.txt HTTP/1.1
Accept: */*
Transfer-Encoding: chunked
Content-Type: text/plain
Content-Length: 29
Host: [BucketName].s3.amazonaws.com
Date: [Date]
Authorization: AWS [Access Key ID]:[Signature]
Expect: 100-continue
I have truncated the bucket name, Access Key ID and signature for security reasons.
I am not sure what I'm doing wrong, but I suspect the error is caused by the Accept and Transfer-Encoding fields (not really sure). Can anyone tell me what I'm doing wrong, or why I'm getting a 501?
The game has changed significantly since this question was asked: the simple authorization headers no longer apply. Yet it is still feasible to do this from a UNIX shell script, as follows.
Ensure 'openssl' and 'curl' are available at the command line. TIP: double-check the openssl argument syntax, as it may vary between versions of the tool; e.g. openssl sha -sha256 ... versus openssl sha256 ...
Beware of a single extra newline or space character, or of CRLF where a lone newline is expected: any of these will defeat the signature. Note too that you may want to use content types, possibly with encodings, to prevent any data transformation by the communication media. You may then have to adjust the list of signed headers in several places; please refer to the Amazon S3 API docs for the numerous conventions to enforce, such as the alphabetical lowercase ordering of the header info used in the hash calculations at several (redundant) places.
# BERHAUZ Nov 2019 - curl script for file upload to Amazon S3 Buckets
test -n "$1" || {
echo "usage: $0 <myFileToSend.txt>"
echo "... missing argument file ..."
exit
}
yyyymmdd=`date +%Y%m%d`
isoDate=`date --utc +%Y%m%dT%H%M%SZ`
# EDIT the next 4 variables to match your account
s3Bucket="myBucket.name.here"
bucketLocation="eu-central-1"
s3AccessKey="THISISMYACCESSKEY123"
s3SecretKey="ThisIsMySecretKeyABCD1234efgh5678"
#endpoint="${s3Bucket}.s3-${bucketLocation}.amazonaws.com"
endpoint="s3-${bucketLocation}.amazonaws.com"
fileName="$1"
contentLength=`cat ${fileName} | wc -c`
contentHash=`openssl sha256 -hex ${fileName} | sed 's/.* //'`
canonicalRequest="PUT\n/${s3Bucket}/${fileName}\n\ncontent-length:${contentLength}\nhost:${endpoint}\nx-amz-content-sha256:${contentHash}\nx-amz-date:${isoDate}\n\ncontent-length;host;x-amz-content-sha256;x-amz-date\n${contentHash}"
canonicalRequestHash=`echo -en ${canonicalRequest} | openssl sha256 -hex | sed 's/.* //'`
stringToSign="AWS4-HMAC-SHA256\n${isoDate}\n${yyyymmdd}/${bucketLocation}/s3/aws4_request\n${canonicalRequestHash}"
echo "----------------- canonicalRequest --------------------"
echo -e ${canonicalRequest}
echo "----------------- stringToSign --------------------"
echo -e ${stringToSign}
echo "-------------------------------------------------------"
# calculate the signing key
DateKey=`echo -n "${yyyymmdd}" | openssl sha256 -hex -hmac "AWS4${s3SecretKey}" | sed 's/.* //'`
DateRegionKey=`echo -n "${bucketLocation}" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateKey} | sed 's/.* //'`
DateRegionServiceKey=`echo -n "s3" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionKey} | sed 's/.* //'`
SigningKey=`echo -n "aws4_request" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionServiceKey} | sed 's/.* //'`
# then, once more a HMAC for the signature
signature=`echo -en ${stringToSign} | openssl sha256 -hex -mac HMAC -macopt hexkey:${SigningKey} | sed 's/.* //'`
authoriz="Authorization: AWS4-HMAC-SHA256 Credential=${s3AccessKey}/${yyyymmdd}/${bucketLocation}/s3/aws4_request, SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date, Signature=${signature}"
curl -v -X PUT -T "${fileName}" \
-H "Host: ${endpoint}" \
-H "Content-Length: ${contentLength}" \
-H "x-amz-date: ${isoDate}" \
-H "x-amz-content-sha256: ${contentHash}" \
-H "${authoriz}" \
http://${endpoint}/${s3Bucket}/${fileName}
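If the AWS CLI happens to be installed and configured with the same keys, a quick optional check that the object actually landed (bucket and region as configured above):
aws s3 ls "s3://${s3Bucket}/" --region "${bucketLocation}"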
I must acknowledge that, for someone a bit involved in cryptography like me, the Amazon signature scheme invites a number of criticisms:
there's much redundancy in the information being signed,
the 5-step HMAC cascade almost inverts the semantics of key seed and data, where 1 step would suffice with proper usage and the same security
the last 12 characters of the secret key are useless here, because the significant key length of a SHA256 HMAC is ... 256 bits, hence 32 bytes, of which the first 4 are always "AWS4" for no real purpose
overall, the AWS S3 API re-invents standards where an S/MIME payload would have done
Apologies for the criticism, I was not able to resist. Yet I acknowledge: it works reliably, it is useful for many companies, and it is an interesting service with a rich API.
You could execute a bash file. Here is an example upload.sh script which you could just run as: sh upload.sh yourfile
#!/bin/bash
file=$1
bucket=YOUR_BUCKET
resource="/${bucket}/${file}"
contentType="application/x-itunes-ipa"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=YOUR_KEY_HERE
s3Secret=YOUR_SECRET
echo "SENDING TO S3"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -vv -X PUT -T "${file}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
more on: http://www.jamesransom.net/?p=58
Solved: I was missing a CURLOPT for the file size in my code, and now everything is working perfectly.