FFmpeg ignores some HTTP options when using the PUT method - amazon-web-services

I am using FFmpeg to create a CMAF stream and upload it to an AWS resource (AWS MediaStore) using FFmpeg's PUT method.
I need to pass the Content-Type header when uploading manifests & segments.
I have 3 types of files:
application/x-mpegURL: m3u8 manifest
application/dash+xml: mpd manifest
video/mp4: video segments
Currently, all the types are set to binary/octet-stream in the AWS resource (AWS MediaStore).
As I will upload a huge number of files, I can't use AWS Lambda functions to set the correct content type after a file has been uploaded.
FFmpeg upload logs
[https @ 0x555fe7a7d1c0] Opening 'https://XXXX.YYYY.amazonaws.com/chunk-stream0-00001.mp4' for writing
[https @ 0x555fe7a7d0c0] request: PUT /chunk-stream0-00001.mp4 HTTP/1.1
Transfer-Encoding: chunked
User-Agent: Lavf/58.28.100
Accept: */*
Connection: keep-alive
Host: XXXXX.YYYY.amazonaws.com
Icy-MetaData: 1
My tries
I tried static builds & the master branch of FFmpeg.
I tried different ways to pass the content type, without success:
-mime_type 1 -headers "Content-type: video/mp4\r\n"
-mime_type "video/mp4,application/dash+xml,application/x-mpegURL"
-content_type application/dash+xml
-multiple_requests 1 -headers "a:b" -icy 0
Upload command:
./ffmpeg -re -i ~/videos/BigBuckBunny.mp4 -loglevel debug \
-map 0 -map 0 -map 0 -c:a aac -c:v libx264 -tune zerolatency \
-b:v:0 2000k -s:v:0 1280x720 -profile:v:0 high -b:v:1 1500k -s:v:1 640x340 -profile:v:1 main -b:v:2 500k -s:v:2 320x170 -profile:v:2 baseline -bf 1 \
-keyint_min 24 -g 24 -sc_threshold 0 -b_strategy 0 -ar:a:1 22050 -use_timeline 1 -use_template 1 -window_size 5 \
-adaptation_sets "id=0,streams=v id=1,streams=a" -hls_playlist 1 -seg_duration 3 -streaming 1 \
-strict experimental -lhls 1 -remove_at_exit 0 -master_m3u8_publish_rate 3 \
-f dash -method PUT -http_persistent 1 https://example.com/manifest.mpd
Any help would be highly appreciated.
Reference:
https://www.ffmpeg.org/ffmpeg-protocols.html#http

Related

Recreating a file-based POST request from Django-Rest-Framework test in curl [duplicate]

I would like to use cURL to not only send data parameters in an HTTP POST but to also upload files with a specific form name. How should I go about doing that?
HTTP Post parameters:
userid = 12345
filecomment = This is an image file
HTTP File upload:
File location = /home/user1/Desktop/test.jpg
Form name for file = image (corresponds to $_FILES['image'] on the PHP side)
I figured part of the cURL command as follows:
curl -d "userid=1&filecomment=This is an image file" --data-binary @"/home/user1/Desktop/test.jpg" localhost/uploader.php
The problem I am getting is as follows:
Notice: Undefined index: image in /var/www/uploader.php
The problem is I am using $_FILES['image'] to pick up files in the PHP script.
How do I adjust my cURL command accordingly?
You need to use the -F option:
-F/--form <name=content> Specify HTTP multipart POST data (H)
Try this:
curl \
-F "userid=1" \
-F "filecomment=This is an image file" \
-F "image=#/home/user1/Desktop/test.jpg" \
localhost/uploader.php
Catching the user id as a path variable (recommended):
curl -i -X POST -H "Content-Type: multipart/form-data" \
  -F "data=@test.mp3" http://mysuperserver/media/1234/upload/
Catching the user id as part of the form:
curl -i -X POST -H "Content-Type: multipart/form-data" \
  -F "data=@test.mp3;userid=1234" http://mysuperserver/media/upload/
or:
curl -i -X POST -H "Content-Type: multipart/form-data" \
  -F "data=@test.mp3" -F "userid=1234" http://mysuperserver/media/upload/
Here is my solution. I have been reading a lot of posts and they were really helpful. Finally I wrote some code for small files, with cURL and PHP, that I think is really useful.
public function postFile()
{
    $file_url = "test.txt"; // the file route; here it is in the same directory, but a URL like "http://examplewebsite.com/test.txt" works too
    $eol = "\r\n"; // default line break for the MIME type
    $BOUNDARY = md5(time()); // random boundary id, a separator for each param of my POST body
    $BODY = ""; // init my curl body
    $BODY .= '--' . $BOUNDARY . $eol; // start param header
    $BODY .= 'Content-Disposition: form-data; name="sometext"' . $eol . $eol; // the last header line gets 2 $eol; in this case there is only 1 header line
    $BODY .= "Some Data" . $eol; // param data, in this case a simple post value, plus 1 $eol to end the data
    $BODY .= '--' . $BOUNDARY . $eol; // start 2nd param
    $BODY .= 'Content-Disposition: form-data; name="somefile"; filename="test.txt"' . $eol; // first header line for the posted file; use 1 $eol while more header lines follow, and 2 on the last one to close the header block
    $BODY .= 'Content-Type: application/octet-stream' . $eol; // same as the row before
    $BODY .= 'Content-Transfer-Encoding: base64' . $eol . $eol; // the last header line, so 2 $eol
    $BODY .= chunk_split(base64_encode(file_get_contents($file_url))) . $eol; // write the Base64 file content plus $eol to finish the data
    $BODY .= '--' . $BOUNDARY . '--' . $eol . $eol; // close the param and the post with "--" and 2 $eol at the end of our boundary header

    $ch = curl_init(); // init curl
    curl_setopt($ch, CURLOPT_HTTPHEADER, array(
        'X_PARAM_TOKEN: 71e2cb8b-42b7-4bf0-b2e8-53fbd2f578f9', // custom header for my API validation; you can read it from the $_SERVER["HTTP_X_PARAM_TOKEN"] variable
        "Content-Type: multipart/form-data; boundary=" . $BOUNDARY // set the MIME type so it lands in the $_FILES variable
    ));
    curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/1.0 (Windows NT 6.1; WOW64; rv:28.0) Gecko/20100101 Firefox/28.0'); // set our user agent
    curl_setopt($ch, CURLOPT_URL, "api.endpoint.post"); // set our API post URL
    curl_setopt($ch, CURLOPT_COOKIEJAR, $BOUNDARY . '.txt'); // save cookies just in case we want them
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response content
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow redirects from the endpoint
    curl_setopt($ch, CURLOPT_POST, true); // set as POST
    curl_setopt($ch, CURLOPT_POSTFIELDS, $BODY); // set our $BODY
    $response = curl_exec($ch); // execute the request
    print_r($response); // print response
}
With this we should get the following vars posted to "api.endpoint.post". You can easily test it with this script, and you should receive these debug outputs via the last row of postFile():
print_r($response); //print response
public function getPostFile()
{
    echo "\n\n_SERVER\n";
    echo "<pre>";
    print_r($_SERVER['HTTP_X_PARAM_TOKEN']);
    echo "</pre>";
    echo "_POST\n";
    echo "<pre>";
    print_r($_POST['sometext']);
    echo "</pre>";
    echo "_FILES\n";
    echo "<pre>";
    print_r($_FILES['somefile']);
    echo "</pre>";
}
It should work well. There may be better solutions, but this one works and is really helpful for understanding how the boundary and multipart/form-data MIME type work with PHP and the cURL library.
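For comparison, here is a shell sketch that hand-builds the same kind of multipart body and sends it with plain curl, which makes the boundary mechanics visible. It assumes a local test.txt and uses httpbin.org as a stand-in endpoint that echoes back what it receives:
#!/bin/bash
# Sketch: a hand-built multipart/form-data body, mirroring the PHP example above
BOUNDARY="----$(date +%s)" # random-ish boundary, the separator between params
FILE="test.txt"            # assumed local test file
{
  printf -- '--%s\r\n' "$BOUNDARY"                                 # start 1st param
  printf 'Content-Disposition: form-data; name="sometext"\r\n\r\n' # blank line ends the part headers
  printf 'Some Data\r\n'                                           # the param value
  printf -- '--%s\r\n' "$BOUNDARY"                                 # start 2nd param
  printf 'Content-Disposition: form-data; name="somefile"; filename="%s"\r\n' "$FILE"
  printf 'Content-Type: application/octet-stream\r\n\r\n'
  cat "$FILE"                                                      # raw file bytes
  printf '\r\n--%s--\r\n' "$BOUNDARY"                              # closing boundary
} | curl --data-binary @- \
    -H "Content-Type: multipart/form-data; boundary=$BOUNDARY" \
    https://httpbin.org/post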
If you are uploading a binary file such as a CSV, use the format below to upload the file:
curl -X POST \
'http://localhost:8080/workers' \
-H 'authorization: eyJhbGciOiJIUzI1NiIsInR5cCI6ImFjY2VzcyIsInR5cGUiOiJhY2Nlc3MifQ.eyJ1c2VySWQiOjEsImFjY291bnRJZCI6MSwiaWF0IjoxNTExMzMwMzg5LCJleHAiOjE1MTM5MjIzODksImF1ZCI6Imh0dHBzOi8veW91cmRvbWFpbi5jb20iLCJpc3MiOiJmZWF0aGVycyIsInN1YiI6ImFub255bW91cyJ9.HWk7qJ0uK6SEi8qSeeB6-TGslDlZOTpG51U6kVi8nYc' \
-H 'content-type: application/x-www-form-urlencoded' \
--data-binary '@/home/limitless/Downloads/iRoute Masters - Workers.csv'
After a lot of tries, this command worked for me:
curl -v -F filename=image.jpg -F upload=@image.jpg http://localhost:8080/api/upload
The issue that led me here turned out to be a basic user error - I wasn't including the @ sign in the path of the file, and so curl was posting the path/name of the file rather than the contents. The Content-Length value was therefore 8 rather than the 479 I expected to see given the length of my test file.
The Content-Length header will be automatically calculated when curl reads and posts the file.
curl -i -H "Content-Type: application/xml" --data "@test.xml" -v -X POST https://<url>/<uri>
...
< Content-Length: 479
...
Posting this here to assist other newbies in future.
As an alternative to curl, you can use HTTPie; it's a CLI, cURL-like tool for humans.
Installation instructions: https://github.com/jakubroztocil/httpie#installation
Then, run:
http -f POST http://localhost:4040/api/users username=johnsnow photo@images/avatar.jpg
HTTP/1.1 200 OK
Access-Control-Expose-Headers: X-Frontend
Cache-control: no-store
Connection: keep-alive
Content-Encoding: gzip
Content-Length: 89
Content-Type: text/html; charset=windows-1251
Date: Tue, 26 Jun 2018 11:11:55 GMT
Pragma: no-cache
Server: Apache
Vary: Accept-Encoding
X-Frontend: front623311
...
I got it working with this command: curl -F 'filename=@/home/yourhomedirectory/file.txt' http://yourserver/upload
cat test.txt
file test.txt content.
curl -v -F "hello=word" -F "file=#test.txt" https://httpbin.org/post
> POST /post HTTP/2
> Host: httpbin.org
> user-agent: curl/7.68.0
> accept: */*
> content-length: 307
> content-type: multipart/form-data; boundary=------------------------78a9f655d8c87a53
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
* We are completely uploaded and fine
< HTTP/2 200
< date: Mon, 15 Nov 2021 06:18:47 GMT
< content-type: application/json
< content-length: 510
< server: gunicorn/19.9.0
< access-control-allow-origin: *
< access-control-allow-credentials: true
<
{
  "args": {},
  "data": "",
  "files": {
    "file": "file test.txt content.\n"
  },
  "form": {
    "hello": "word"
  },
  "headers": {
    "Accept": "*/*",
    "Content-Length": "307",
    "Content-Type": "multipart/form-data; boundary=------------------------78a9f655d8c87a53",
    "Host": "httpbin.org",
    "User-Agent": "curl/7.68.0",
    "X-Amzn-Trace-Id": "Root=1-6191fbc7-6c68fead194d943d07148860"
  },
  "json": null,
  "origin": "43.129.xx.xxx",
  "url": "https://httpbin.org/post"
}
Here is how to correctly escape arbitrary filenames of uploaded files with bash:
#!/bin/bash
set -eu
f="$1"
f=${f//\\/\\\\}
f=${f//\"/\\\"}
f=${f//;/\\;}
curl --silent --form "uploaded=@\"$f\"" "$2"
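Usage might look like this, with the script saved as upload.sh (httpbin.org used as a stand-in endpoint; the awkward filename is just an illustration):
# a filename containing quotes and semicolons survives the escaping
touch 'my "odd; name".txt'
./upload.sh 'my "odd; name".txt' https://httpbin.org/post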
Save all sent files to a folder with this PHP file on the host, u.php:
<?php
$uploaddir = 'C:/VALID_DIR/';
echo '<pre>';
foreach ($_FILES as $key => $file) {
    if (!isset($file) || !isset($file['name'])) continue;
    $uploadfile = $uploaddir . basename($file['name']);
    if (move_uploaded_file($file['tmp_name'], $uploadfile)) {
        echo "$key file > $uploadfile .\n";
    } else {
        echo " Error $key file.\n";
    }
}
print_r($_FILES);
print "</pre>";
?>
Usage from client:
curl -v -F filename=ff.xml -F upload=@ff.xml https://myhost.com/u.php
This worked for me.
My VM had crashed and had only an internet connection.
I recovered some files this way.

How to parse an HTTP JSON response and fail or pass a job based on that?

I have a GitLab CI YAML file and 2 jobs. My .gitlab-ci.yaml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'
stages:
  - build
  - trigger_IT_service
build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job report:
Running on DIGITALIZATION...
00:00
Fetching changes with git depth set to 50...
00:05
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer
00:02
StatusCode : 200
StatusDescription : 200
Content : {"status":200,"message":"SAP transfer started. Please
check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
.722+0300","responseObject":null}
RawContent : HTTP/1.1 200 200
Keep-Alive: timeout=10
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Mar 2020 10:53:05 GMT
Server: Apache
I have to check the "Content" part of this report in the GitLab CI YAML.
If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
Actually my question is:
how to parse the HTTP JSON response and fail or pass the job based on that
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are different examples here.
Given the JSON example from the comment:
{
  "status": 200,
  "message": "SAP transfer started. Please check in db",
  "errorCode": 0,
  "timestamp": "2020-03-25T17:06:43.430+0300",
  "responseObject": null
}
If you can install python3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional install with pip install requests

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service stage in your YAML would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"

Got 404 from http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

On AWS ECS or AWS CodeBuild etc., when trying to retrieve the credentials using:
http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
suddenly, since Feb 7, 2019, I get 404 Not Found!
curl -qL -o aws_credentials.json http://169.254.170.2/$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
The expected result should be a valid json of the AWS Credentials session
After a short investigation:
I found that $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI already starts with a slash '/'
[e.g. AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/xxxx-xxxx-xxxx-xxxx-xxxxx]
Solution: just remove the slash after the IP.
e.g. http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
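Applied to the curl call above, the fixed command becomes:
# no slash between the IP and the variable, since the relative URI already starts with '/'
curl -qL -o aws_credentials.json "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"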
TL;DR;
I run curl with -v on AWS CodeBuild:
> GET //v2/credentials/xxxx-xxxx-xxxx-xxxx-xxxxx HTTP/1.1
> Host: 169.254.170.2
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
Conclusion: since Feb 6 or 7, 2019, AWS added a strict check and breaks the request with a 404
for the double slash //

youtube-dl error:Cannot download a video and extract audio into the same file

I used the exact same youtube-dl command without the playlist option to download individual audio files, and it worked. But when I use it for this playlist, I get an error: Cannot download a video and extract audio into the same file! Use "(ext)s.%(ext)s" instead of "(ext)s" as the output template
Running on Windows 10. Any help would be greatly appreciated!!
PS C:\xxx\FFMPEG> .\YouTubeBatchAudioPlaylistIndexes.bat
C:\xxx\FFMPEG>call bin\youtube-dl.exe -x --audio-format "mp3" --audio-quality 3 --batch-file="songs.txt" --playlist-items 4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50 -o"C:\Users\xxx\Downloads\%(title)s.%(ext)s" --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-x', '--audio-format', 'mp3', '--audio-quality', '3', '--batch-file=songs.txt', '--playlist-items', '4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50', '-oC:\\Users\\xxx\\Downloads\\(ext)s', '--verbose']
[debug] Batch file urls: ['https://www.youtube.com/watch?v=anurOHpo0aY&index=4&list=PLlRluznmnq9f7OMI4avwFyV2xMVxlV3_w&t=0s']
Usage: youtube-dl.exe [OPTIONS] URL [URL...]
youtube-dl.exe: error: Cannot download a video and extract audio into the same file! Use "C:\Users\xxx\Downloads\(ext)s.%(ext)s" instead of "C:\Users\xxx\Downloads\(ext)s" as the output template
If you look at the output, you see that the percent signs in your output template were gobbled up:
(...) '-oC:\\Users\\xxx\\Downloads\\(ext)s', '--verbose']
That is because in a batch file, you need to write %% if you want a percent sign, and double that again for call, like this:
call bin\youtube-dl.exe -x --audio-format "mp3" --audio-quality 3 ^
--batch-file="songs.txt" --playlist-items ^
4,6,7,8,10,11,16,17,20,21,23,25,27,28,31,33,36,38,39,41,43,45,46,48,50 ^
-o "C:\Users\xxx\Downloads\%%%%(title)s.%%%%(ext)s" --verbose

Uploading to Amazon S3 using cURL/libcurl

I am currently trying to develop an application to upload files to an Amazon S3 bucket using cURL and C++. After carefully reading the S3 developer's guide, I started implementing my application using cURL and forming the header as described by the guide. After lots of trial and error to determine the best way to create the S3 signature, I am now facing a 501 error. The received header suggests that the method I'm using is not implemented. I am not sure where I'm wrong, but here is the HTTP header that I'm sending to Amazon:
PUT /test1.txt HTTP/1.1
Accept: */*
Transfer-Encoding: chunked
Content-Type: text/plain
Content-Length: 29
Host: [BucketName].s3.amazonaws.com
Date: [Date]
Authorization: AWS [Access Key ID]:[Signature]
Expect: 100-continue
I have truncated the Bucket Name, Access Key ID and Signature for security reasons.
I am not sure what I'm doing wrong, but I think the error is generated because of the Accept and Transfer-Encoding fields (not really sure). So can anyone tell me what I'm doing wrong or why I'm getting a 501?
The game has changed significantly since the question was asked; the simple authorization headers no longer apply, yet it is still feasible with a UNIX shell script, as follows.
Ensure 'openssl' and 'curl' are available at the command line. TIP: double-check the openssl argument syntax, as it may vary between versions of the tool; e.g. openssl sha -sha256 ... versus openssl sha256 ...
Beware: a single extra newline or space character, or the use of CRLF in place of the newline char alone, would defeat the signature. Note too that you may want to use content types, possibly with encodings, to prevent any data transformation through the communication media. You may then have to adjust the list of signed headers in several places; please refer to the Amazon S3 API docs for the numerous conventions to keep enforced, like alphabetical-lowercase ordering of the header info used in hash calculations at several (redundant) places.
# BERHAUZ Nov 2019 - curl script for file upload to Amazon S3 Buckets
test -n "$1" || {
  echo "usage: $0 <myFileToSend.txt>"
  echo "... missing argument file ..."
  exit
}
yyyymmdd=`date +%Y%m%d`
isoDate=`date --utc +%Y%m%dT%H%M%SZ`
# EDIT the next 4 variables to match your account
s3Bucket="myBucket.name.here"
bucketLocation="eu-central-1"
s3AccessKey="THISISMYACCESSKEY123"
s3SecretKey="ThisIsMySecretKeyABCD1234efgh5678"
#endpoint="${s3Bucket}.s3-${bucketLocation}.amazonaws.com"
endpoint="s3-${bucketLocation}.amazonaws.com"
fileName="$1"
contentLength=`cat ${fileName} | wc -c`
contentHash=`openssl sha256 -hex ${fileName} | sed 's/.* //'`
canonicalRequest="PUT\n/${s3Bucket}/${fileName}\n\ncontent-length:${contentLength}\nhost:${endpoint}\nx-amz-content-sha256:${contentHash}\nx-amz-date:${isoDate}\n\ncontent-length;host;x-amz-content-sha256;x-amz-date\n${contentHash}"
canonicalRequestHash=`echo -en ${canonicalRequest} | openssl sha256 -hex | sed 's/.* //'`
stringToSign="AWS4-HMAC-SHA256\n${isoDate}\n${yyyymmdd}/${bucketLocation}/s3/aws4_request\n${canonicalRequestHash}"
echo "----------------- canonicalRequest --------------------"
echo -e ${canonicalRequest}
echo "----------------- stringToSign --------------------"
echo -e ${stringToSign}
echo "-------------------------------------------------------"
# calculate the signing key
DateKey=`echo -n "${yyyymmdd}" | openssl sha256 -hex -hmac "AWS4${s3SecretKey}" | sed 's/.* //'`
DateRegionKey=`echo -n "${bucketLocation}" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateKey} | sed 's/.* //'`
DateRegionServiceKey=`echo -n "s3" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionKey} | sed 's/.* //'`
SigningKey=`echo -n "aws4_request" | openssl sha256 -hex -mac HMAC -macopt hexkey:${DateRegionServiceKey} | sed 's/.* //'`
# then, once more a HMAC for the signature
signature=`echo -en ${stringToSign} | openssl sha256 -hex -mac HMAC -macopt hexkey:${SigningKey} | sed 's/.* //'`
authoriz="Authorization: AWS4-HMAC-SHA256 Credential=${s3AccessKey}/${yyyymmdd}/${bucketLocation}/s3/aws4_request, SignedHeaders=content-length;host;x-amz-content-sha256;x-amz-date, Signature=${signature}"
curl -v -X PUT -T "${fileName}" \
-H "Host: ${endpoint}" \
-H "Content-Length: ${contentLength}" \
-H "x-amz-date: ${isoDate}" \
-H "x-amz-content-sha256: ${contentHash}" \
-H "${authoriz}" \
http://${endpoint}/${s3Bucket}/${fileName}
I must acknowledge that, for someone a bit involved in cryptography like me, the Amazon signature scheme deserves numerous criticisms:
there's much redundancy in the information being signed,
the 5-step HMAC cascade almost inverts the semantics between key seed and data, where 1 step would suffice with proper usage and the same security,
the last 12 characters of the secret key are useless here, because the significant key length of a SHA256 HMAC is ... 256 bits, hence 32 bytes, of which the first 4 always start with "AWS4" for no purpose at all,
overall the AWS S3 API re-invents standards where an S/MIME payload would have done.
Apologies for the criticism; I was not able to resist. Yet I acknowledge: it is working reliably, useful for many companies, and an interesting service with a rich API.
You could execute a bash file. Here is an example upload.sh script which you could just run as: sh upload.sh yourfile
#!/bin/bash
file=$1
bucket=YOUR_BUCKET
resource="/${bucket}/${file}"
contentType="application/x-itunes-ipa"
dateValue=`date -R`
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${resource}"
s3Key=YOUR_KEY_HERE
s3Secret=YOUR_SECRET
echo "SENDING TO S3"
signature=`echo -en ${stringToSign} | openssl sha1 -hmac ${s3Secret} -binary | base64`
curl -vv -X PUT -T "${file}" \
-H "Host: ${bucket}.s3.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://${bucket}.s3.amazonaws.com/${file}
more on: http://www.jamesransom.net/?p=58
Solved: I was missing a CURLOPT for the file size in my code, and now everything is working perfectly.