I am using the following command to check the expiry of SSL certs:
cat localdomains | xargs -L 1 bash -c 'openssl s_client -connect $0:443 -servername $0 2> /dev/null | openssl x509 -noout -enddate | cut -d = -f 2 | xargs -I {} echo {} $0'
When I run into domains that return no cert, I would like the output to show something like N/A, instead of the pipeline still trying to evaluate "cut -d = -f 2" and then xargs -I.
If you capture the output of your command in a variable, you can then validate it. Assuming this doesn't have to be a one-liner:
#!/bin/bash
while read -r domain; do
    expiry=$(openssl s_client -connect "${domain}:443" -servername "${domain}" 2>/dev/null </dev/null | \
        openssl x509 -noout -enddate 2>&1 | cut -d = -f 2)
    # validate the output with date: no cert means an empty or unparsable value
    if date -d "${expiry}" > /dev/null 2>/dev/null; then
        echo "${expiry}" "${domain}"
    else
        echo "N/A" "${domain}"
    fi
done
Note a couple of things:
Redirect /dev/null into stdin of openssl s_client so the command returns immediately instead of waiting for input.
Redirect stderr of the second openssl x509 to stdout in order to validate against it.
You could use grep or sed to validate the output. I found date to be convenient (and if you wanted to use it to reformat the date, it would be extra-convenient).
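For instance, a minimal variant of the if-block above, assuming GNU date, that also normalizes the expiry to ISO format:

# reformat the expiry to YYYY-MM-DD while validating it (GNU date assumed)
if parsed=$(date -d "${expiry}" +%Y-%m-%d 2>/dev/null); then
    echo "${parsed}" "${domain}"
else
    echo "N/A" "${domain}"
fi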
I tested this solution by putting it into a file check_cert_expiry.sh:
$ cat localdomains
stackoverflow.com
example.example
google.com
$ cat localdomains | ./check_cert_expiry.sh
Aug 14 12:00:00 2019 GMT stackoverflow.com
N/A example.example
Feb 21 09:37:00 2018 GMT google.com
Cheers!
Related
I want to download an apk that lives in my private s3 bucket using the curl command. I don't want to use awscli/boto3. I have SecretAccessKey, SessionToken, Expiration, AccessKeyId.
I tried the following:
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "x-amz-security-token: xxxxxxxxxxxxxxxx" -H "Content-Type: application/xml" -X GET https://xyz/test.apk
curl -k -v -L -o url="https://s3-eu-west-1.amazonaws.com" -H "Content-Type: application/xml" -X GET https://xyz/.test.apk?AWSAccessKeyId=xxxxxxxxxxxxx
Here is a script that downloads files from and uploads files to s3. You have to export your keys, or you can modify the script accordingly.
export AWS_ACCESS_KEY_ID=AKxxx
export AWS_SECRET_ACCESS_KEY=zzzz
Download a file
./s3download.sh get s3://mybucket/myfile.txt myfile.txt
That's it; all you need to pass is the s3 url along with the file name.
#!/bin/bash
set -eu

s3simple() {
  local command="$1"
  local url="$2"
  local file="${3:--}"

  # todo: nice error message if unsupported command?
  if [ "${url:0:5}" != "s3://" ]; then
    echo "Need an s3 url"
    return 1
  fi
  local path="${url:4}"   # strips "s3:/", leaving /bucket/key

  if [ -z "${AWS_ACCESS_KEY_ID-}" ]; then
    echo "Need AWS_ACCESS_KEY_ID to be set"
    return 1
  fi
  if [ -z "${AWS_SECRET_ACCESS_KEY-}" ]; then
    echo "Need AWS_SECRET_ACCESS_KEY to be set"
    return 1
  fi

  local method md5 args
  case "$command" in
    get)
      method="GET"
      md5=""
      args="-o $file"
      ;;
    put)
      method="PUT"
      if [ ! -f "$file" ]; then
        echo "file not found"
        exit 1
      fi
      md5="$(openssl md5 -binary "$file" | openssl base64)"
      args="-T $file -H Content-MD5:$md5"
      ;;
    *)
      echo "Unsupported command"
      return 1
      ;;
  esac

  # legacy (v2) AWS signature: HMAC-SHA1 over method, md5, date and path
  local date="$(date -u '+%a, %e %b %Y %H:%M:%S +0000')"
  local string_to_sign
  printf -v string_to_sign "%s\n%s\n\n%s\n%s" "$method" "$md5" "$date" "$path"
  local signature=$(echo -n "$string_to_sign" | openssl sha1 -binary -hmac "${AWS_SECRET_ACCESS_KEY}" | openssl base64)
  local authorization="AWS ${AWS_ACCESS_KEY_ID}:${signature}"

  curl $args -s -f -H Date:"${date}" -H Authorization:"${authorization}" https://s3.amazonaws.com"${path}"
}

s3simple "$@"
You can find more detail here
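For completeness, the put path works the same way; a hypothetical upload run (bucket and file names are placeholders):

# upload a local file to the same placeholder bucket
./s3download.sh put s3://mybucket/myfile.txt myfile.txt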
I've been trying this for a couple of days now but keep getting stuck at the signature calculation. For background: I have an EC2 instance role assigned to the instance, and I also need to use KMS SSE (server-side encryption) to store data in S3.
So, this is the script I'm using now:
#!/usr/bin/env bash
set -E
export TERM=xterm
#
s3_region='eu-west-1'
s3_bucket='my-s3-file-bucket'
#
bkup_optn='hourly'
data_type='application/octet-stream'
bkup_path="/tmp/backups/${bkup_optn}"
bkup_file="$( ls -t ${bkup_path}|head -n1 )"
timestamp="$( LC_ALL=C date -u "+%Y-%m-%d %H:%M:%S" )"
#
appHost="$( hostname -f )"
thisApp="$( facter -p my_role )"
thisEnv="$( facter -p my_environment )"
upldUri="${thisEnv}/${appHost}/${bkup_optn}/${bkup_file}"
# This doesn't work on OS X
iso_timestamp=$(date -ud "${timestamp}" "+%Y%m%dT%H%M%SZ")
date_scope=$(date -ud "${timestamp}" "+%Y%m%d")
date_header=$(date -ud "${timestamp}" "+%a, %d %h %Y %T %Z")
## AWS instance role
awsMetaUri='http://169.254.169.254/latest/meta-data/iam/security-credentials/'
awsInstRole=$( curl -s ${awsMetaUri} )
awsAccessKey=$( curl -s ${awsMetaUri}${awsInstRole}|awk -F'"' '/AccessKeyId/ {print $4}' )
awsSecretKey=$( curl -s ${awsMetaUri}${awsInstRole}|grep SecretAccessKey|cut -d':' -f2|sed 's/[^0-9A-Za-z/+=]*//g' )
awsSecuToken=$( curl -s ${awsMetaUri}${awsInstRole}|sed -n '/Token/{p;}'|cut -f4 -d'"' )
signedHeader='date;host;x-amz-content-sha256;x-amz-date;x-amz-security-token;x-amz-server-side-encryption;x-amz-server-side-encryption-aws-kms-key-id'
echo -e "awsInstRole => ${awsInstRole}\nawsAccessKey => ${awsAccessKey}\nawsSecretKey => ${awsSecretKey}"
payload_hash()
{
    local output=$(shasum -ba 256 "${bkup_path}/${bkup_file}")
    echo "${output%% *}"
}

canonical_request()
{
    echo "PUT"
    echo "/${upldUri}"
    echo ""
    echo "date:${date_header}"
    echo "host:${s3_bucket}.s3.amazonaws.com"
    echo "x-amz-security-token:${awsSecuToken}"
    echo "x-amz-content-sha256:$(payload_hash)"
    echo "x-amz-server-side-encryption:aws:kms"
    echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
    echo "x-amz-date:${iso_timestamp}"
    echo ""
    echo "${signedHeader}"
    printf "$(payload_hash)"
}

canonical_request_hash()
{
    local output=$(canonical_request | shasum -a 256)
    echo "${output%% *}"
}

string_to_sign()
{
    echo "AWS4-HMAC-SHA256"
    echo "${iso_timestamp}"
    echo "${date_scope}/${s3_region}/s3/aws4_request"
    echo "x-amz-security-token:${awsSecuToken}"
    echo "x-amz-server-side-encryption:aws:kms"
    echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
    printf "$(canonical_request_hash)"
}

signature_key()
{
    local secret=$(printf "AWS4${awsSecretKey}" | hex_key)
    local date_key=$(printf ${date_scope} | hmac_sha256 "${secret}" | hex_key)
    local region_key=$(printf ${s3_region} | hmac_sha256 "${date_key}" | hex_key)
    local service_key=$(printf "s3" | hmac_sha256 "${region_key}" | hex_key)
    printf "aws4_request" | hmac_sha256 "${service_key}" | hex_key
}

hex_key() {
    xxd -p -c 256
}

hmac_sha256() {
    local hexkey=$1
    openssl dgst -binary -sha256 -mac HMAC -macopt hexkey:${hexkey}
}

signature() {
    string_to_sign | hmac_sha256 $(signature_key) | hex_key | sed "s/^.* //"
}

curl \
    -T "${bkup_path}/${bkup_file}" \
    -H "Authorization: AWS4-HMAC-SHA256 Credential=${awsAccessKey}/${date_scope}/${s3_region}/s3/aws4_request,SignedHeaders=${signedHeader},Signature=$(signature)" \
    -H "Date: ${date_header}" \
    -H "x-amz-date: ${iso_timestamp}" \
    -H "x-amz-security-token:${awsSecuToken}" \
    -H "x-amz-content-sha256: $(payload_hash)" \
    -H "x-amz-server-side-encryption:aws:kms" \
    -H "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432" \
    "https://${s3_bucket}.s3.amazonaws.com/${upldUri}"
It was written following this doc:
http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html, plus parts from a sample script on GitHub that I forgot to bookmark. After the initial bumpy ride, I now keep getting this error:
<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXXXXXXXXXXXXXX</AWSAccessKeyId><StringToSign>AWS4-HMAC-SHA256
I went through several AWS docs but I cannot figure out why. Anyone can help me out here please?
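Worth noting when comparing against the linked doc: in SigV4 the string to sign is exactly four lines (algorithm, timestamp, credential scope, hash of the canonical request) with no header values in it, and the canonical headers must appear in alphabetical order, exactly matching the signed-headers list. A sketch of the two functions reshaped that way, using the same variables as the script above (this addresses the spec shape only, not necessarily every problem):

canonical_request()
{
    echo "PUT"
    echo "/${upldUri}"
    echo ""    # empty canonical query string
    # canonical headers, alphabetically, matching ${signedHeader}
    echo "date:${date_header}"
    echo "host:${s3_bucket}.s3.amazonaws.com"
    echo "x-amz-content-sha256:$(payload_hash)"
    echo "x-amz-date:${iso_timestamp}"
    echo "x-amz-security-token:${awsSecuToken}"
    echo "x-amz-server-side-encryption:aws:kms"
    echo "x-amz-server-side-encryption-aws-kms-key-id:arn:aws:kms:eu-west-1:xxxxxxxx111:key/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx1432"
    echo ""
    echo "${signedHeader}"
    printf "$(payload_hash)"
}

string_to_sign()
{
    # exactly four lines: algorithm, timestamp, scope, canonical request hash
    echo "AWS4-HMAC-SHA256"
    echo "${iso_timestamp}"
    echo "${date_scope}/${s3_region}/s3/aws4_request"
    printf "$(canonical_request_hash)"
}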
-San
I would like to download the latest source code of some software (WRF) from a url and automate the installation process thereafter. A sample url is given below:
http://www2.mmm.ucar.edu/wrf/src/WRFV3.6.1.TAR.gz
In the above url, the version number changes from time to time as the developers release new versions. I would like my main script to download the latest available version. I tried the following:
wget -k -l 0 "http://www2.mmm.ucar.edu/wrf/src/" -O index.html ; cat index.html | grep -o 'http:[^"]*.gz' | grep 'WRFV'
With the above code, I can pull all available versions of the software. Its output is below:
http://www2.mmm.ucar.edu/wrf/src/WRFV2.0.3.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV2.1.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV2.1.2.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV2.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV2.2.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV2.2.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.0.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.0.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.1.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.2.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.2.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.3.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.3.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.4.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.4.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.5.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.5.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.6.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Chem-3.6.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3-Var-do-not-use.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.0.1.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.0.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.1.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.2.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.2.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.3.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.3.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.4.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.4.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.5.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.5.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.6.1.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.6.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3.TAR.gz
http://www2.mmm.ucar.edu/wrf/src/WRFV3_OVERLAY_3.0.1.1.TAR.gz
However, I am unable to go further and filter out only the latest version from the list.
Usually, for processing html pages, I recommend perl tools, but since this is a directory-index output, it can (probably) be done with bash tools like grep, sed and such...
The following code is divided into several smaller bash functions, for easy changes:
#!/bin/bash

#getdata - should output html source of the page
getdata() {
    #use wget with output to stdout, or curl, or fetch
    curl -s "http://www2.mmm.ucar.edu/wrf/src/"
    #cat index.html
}

#filter_rows - get the filename and the date columns
filter_rows() {
    sed -n 's:<tr><td.*href="\([^"]*\)">.*>\([0-9].*\)</td>.*</td>.*</td></tr>:\2#\1:p' | grep "${1:-.}"
}

#sort_by_date - probably doesn't need a comment... sorts the lines by date... ;)
sort_by_date() {
    while IFS=# read -r date file
    do
        echo "$(date --date="$date" +%s)#$file"
    done | sort -gr
}

#MAIN
file=$(getdata | filter_rows WRFV | sort_by_date | head -1 | cut -d# -f2)
echo "You want download: $file"
prints
You want download: WRFV3-Chem-3.6.1.TAR.gz
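Fetching that file is then one more step (same base url as in getdata):

# download the newest tarball found above
wget "http://www2.mmm.ucar.edu/wrf/src/${file}"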
What about adding a numeric sort and taking the top line:
wget -k -l 0 "http://www2.mmm.ucar.edu/wrf/src/" -O index.html ; cat index.html | grep -o 'http:[^"]*.gz' | grep 'WRFV[0-9]*[0-9]\.[0-9]' | sort -r -n | head -1
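One caveat: a plain reverse numeric sort falls back to byte order on these urls, which may rank WRFV3.6.TAR.gz above WRFV3.6.1.TAR.gz. If your sort is GNU coreutils, -V (version sort) orders dotted versions more reliably:

# GNU sort -V compares dotted version fields properly
cat index.html | grep -o 'http:[^"]*.gz' | grep 'WRFV[0-9]*[0-9]\.[0-9]' | sort -Vr | head -1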
I am running Accumulo 1.5 in an Ubuntu 12.04 VirtualBox VM. I have set the accumulo-site.xml instance.zookeeper.host file to the VM's IP address, and I can connect to accumulo and run queries from a remote client machine. From the client machine, I can also use a browser to see the hadoop NameNode, browse the filesystem, etc. But I cannot connect to the Accumulo Overview page (port 50095) from anywhere else than directly from the Accumulo VM. There is no firewall between the VM and the client, and besides the Accumulo Overview page not being reachable, everything else seems to work fine.
Is there a config setting that I need to change to allow outside access to the Accumulo Overview console?
thanks
I was able to get the Accumulo monitor to bind to all network interfaces by manually applying this patch:
https://git-wip-us.apache.org/repos/asf?p=accumulo.git;a=commit;h=7655de68
In conf/accumulo-env.sh add:
# Should the monitor bind to all network interfaces -- default: false
export ACCUMULO_MONITOR_BIND_ALL="true"
In bin/config.sh add:
# ACCUMULO-1985 provide a way to use the scripts and still bind to all network interfaces
export ACCUMULO_MONITOR_BIND_ALL=${ACCUMULO_MONITOR_BIND_ALL:-"false"}
And modify bin/start-server.sh to match:
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
   bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
   SOURCE="$(readlink "$SOURCE")"
   [[ $SOURCE != /* ]] && SOURCE="$bin/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
done
bin="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
# Stop: Resolve Script Directory

. "$bin"/config.sh

HOST="$1"
host "$1" >/dev/null 2>/dev/null
if [ $? -ne 0 ]; then
   LOGHOST="$1"
else
   LOGHOST=$(host "$1" | head -1 | cut -d' ' -f1)
fi
ADDRESS="$1"
SERVICE="$2"
LONGNAME="$3"
if [ -z "$LONGNAME" ]; then
   LONGNAME="$2"
fi
SLAVES=$( wc -l < ${ACCUMULO_HOME}/conf/slaves )

IFCONFIG=/sbin/ifconfig
if [ ! -x $IFCONFIG ]; then
   IFCONFIG='/bin/netstat -ie'
fi

# ACCUMULO-1985 Allow monitor to bind on all interfaces
if [ ${SERVICE} == "monitor" -a ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]; then
   ADDRESS="0.0.0.0"
fi

ip=$($IFCONFIG 2>/dev/null | grep inet[^6] | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
if [ $? != 0 ]; then
   ip=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
fi

if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
   PID=$(ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk '{print $2}' | head -1)
else
   PID=$($SSH $HOST ps -ef | egrep ${ACCUMULO_HOME}/.*/accumulo.*.jar | grep "Main $SERVICE" | grep -v grep | awk '{print $2}' | head -1)
fi

if [ -z $PID ]; then
   echo "Starting $LONGNAME on $HOST"
   if [ "$HOST" = "localhost" -o "$HOST" = "`hostname`" -o "$HOST" = "$ip" ]; then
      #${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
      ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err &
      MAX_FILES_OPEN=$(ulimit -n)
   else
      #$SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address $1 >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
      $SSH $HOST "bash -c 'exec nohup ${bin}/accumulo ${SERVICE} --address ${ADDRESS} >${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.out 2>${ACCUMULO_LOG_DIR}/${SERVICE}_${LOGHOST}.err' &"
      MAX_FILES_OPEN=$($SSH $HOST "/usr/bin/env bash -c 'ulimit -n'")
   fi
   if [ -n "$MAX_FILES_OPEN" ] && [ -n "$SLAVES" ]; then
      if [ "$SLAVES" -gt 10 ] && [ "$MAX_FILES_OPEN" -lt 65536 ]; then
         echo "WARN : Max files open on $HOST is $MAX_FILES_OPEN, recommend 65536"
      fi
   fi
else
   echo "$HOST : $LONGNAME already running (${PID})"
fi
Check that the monitor is bound to the correct interface, and not the "localhost" loopback interface. You may have to edit the monitors file in Accumulo's configuration directory with the IP/hostname of the correct interface.
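After restarting the monitor you can confirm the binding; with the default monitor port 50095, 0.0.0.0:50095 means all interfaces and 127.0.0.1:50095 means loopback only:

# show which address the monitor's HTTP port is bound to
netstat -tln | grep 50095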
I am currently trying to develop an application to upload files to an Amazon S3 bucket using cURL and C++. After carefully reading the S3 developer's guide, I started implementing my application, forming the header as the guide describes. After lots of trial and error to determine the best way to create the S3 signature, I am now facing a 501 error: the received response suggests that the method I'm using is not implemented. I am not sure where I'm wrong, but here is the HTTP header that I'm sending to Amazon:
PUT /test1.txt HTTP/1.1
Accept: */*
Transfer-Encoding: chunked
Content-Type: text/plain
Content-Length: 29
Host: [BucketName].s3.amazonaws.com
Date: [Date]
Authorization: AWS [Access Key ID]:[Signature]
Expect: 100-continue
I have truncated the Bucket Name, Access Key ID and Signature for security reasons.
I am not sure what I'm doing wrong, but I think the error is generated because of the Accept and Transfer-Encoding fields (not really sure). Can anyone tell me what I'm doing wrong, or why I'm getting a 501?
The game has changed significantly since this question was asked: the simple authorization headers no longer apply. Yet it is still feasible to do with a UNIX shell script, as follows.
Ensure 'openssl' and 'curl' are available at the command line. TIP: double check the openSSL argument syntax as these may vary with different versions of the tool; e.g. openssl sha -sha256 ... versus openssl sha256 ...
Beware: a single extra newline or space character, or the use of CRLF in place of the newline char alone, will defeat the signature. Note too that you may want to use content types, possibly with encodings, to prevent any data transformation through the communication media. You may then have to adjust the list of signed headers in several places; please refer to the Amazon S3 API docs for the numerous conventions to keep enforced, like the alphabetical lowercase ordering of header info used in hash calculations at several (redundant) places.
# BERHAUZ Nov 2019 - curl script for file upload to Amazon S3 Buckets
test -n "$1" || {
    echo "usage: $0 <myFileToSend.txt>"
    echo "... missing argument file ..."
    exit
}
yyyymmdd=`date +%Y%m%d`
isoDate=`date --utc +%Y%m%dT%H%M%SZ`
# EDIT the next 4 variables to match your account
s3Bucket="myBucket.name.here"
bucketLocation="eu-central-1"
s3AccessKey="THISISMYACCESSKEY123"
s3SecretKey="ThisIsMySecretKeyABCD1234efgh5678"
#endpoint="${s3Bucket}.s3-${bucketLocation}.amazonaws.com"
endpoint="s3-${bucketLocation}.amazonaws.com"
fileName="$1"
contentLength=`cat ${fileName} | wc -c`
contentHash=`openssl sha256 -hex ${fileName} | sed 's/.* //'`
canonicalRequest="PUT\n/${s3Bucket}/${fileName}\n\ncontent-length:${contentLength}\nhost:${endpoint}\nx-amz-content-sha256:${contentHash}\nx-amz-date:${isoDate}\n\ncontent-length;host;x-amz-content-sha256;x-amz-date\n${contentHash}"
canonicalRequestHash=`echo -en ${canonicalRequest} | openssl sha256 -hex | sed 's/.* //'`
stringToSign="AWS4-HMAC-SHA256\n${isoDate}\n${yyyymmdd}/${bucketLocation}/s3/aws4_request\n${canonicalRequestHash}"
echo "----------------- canonicalRequest --------------------"
echo -e ${canonicalRequest}
echo "----------------- stringToSign --------------------"
echo -e ${stringToSign}
echo "-------------------------------------------------------"
# calculate the signing key
DateKey=`echo -n "${yyyymmdd}" | openssl sha256 -hex -hmac "AWS4${s3SecretKey}" | sed 's/.* //'`
DateRegionKey=`echo -n "${bucketLocation}" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateKey} | sed 's/.* //'`
DateRegionServiceKey=`echo -n "s3" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionKey} | sed 's/.* //'`
SigningKey=`echo -n "aws4_request" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionServiceKey} | sed 's/.* //'`
# then, once more a HMAC for the signature
signature=`echo -en ${stringToSign} | openssl sha256 -hex -mac HMAC -macopt hexkey:${SigningKey} | sed 's/.* //'`
authoriz="Authorization: AWS4-HMAC-SHA256 Credential=${s3AccessKey}/${yyyymmdd}/${bucketLocation}/s3/aws4_request, SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date, Signature=${signature}"
curl -v -X PUT -T "${fileName}" \
-H "Host: ${endpoint}" \
-H "Content-Length: ${contentLength}" \
-H "x-amz-date: ${isoDate}" \
-H "x-amz-content-sha256: ${contentHash}" \
-H "${authoriz}" \
http://${endpoint}/${s3Bucket}/${fileName}
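Assuming the script above is saved as s3put.sh (a placeholder name), an upload run looks like:

chmod +x s3put.sh
./s3put.sh myFileToSend.txt    # the file name doubles as the object key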
I must acknowledge that, for someone a bit involved in cryptography like me, the Amazon signature scheme deserves several criticisms:
there's much redundancy in the information being signed,
the 5-step HMAC cascade almost inverts the semantics between key seed and data, where 1 step would suffice with proper usage and the same security,
the last 12 characters of the secret key are useless here, because the significant key length of a SHA256 HMAC is ... 256 bits, hence 32 bytes, of which the first 4 always start with "AWS4" for no real purpose,
overall, the AWS S3 API re-invents standards where an S/MIME payload would have done.
Apologies for the criticism, I was not able to resist. Yet I acknowledge: it works reliably, is useful for many companies, and is an interesting service with a rich API.
You could execute a bash script. Here is an example upload.sh, which you could run as: sh upload.sh yourfile
#!/bin/bash
file=$1
bucket=YOUR_BUCKET
resource="/${bucket}/${file}"
contentType="application/x-itunes-ipa"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=YOUR_KEY_HERE
s3Secret=YOUR_SECRET
echo "SENDING TO S3"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -vv -X PUT -T "${file}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
more on: http://www.jamesransom.net/?p=58
Solved: I was missing a CURLOPT for the file size in my code, and now everything is working perfectly.