Curl successfully uploads the file to S3 using a signed url:
curl -v -k -X PUT \
-H "x-amz-server-side-encryption: AES256" \
-H "Content-Type: application/pdf" \
-T "__tests__/resources/test.pdf" \
"http://mybucket.s3.amazonaws.com/test.pdf?AWSAccessKeyId=IDKEY&Expires=1489458783&Signature=SIGNATURE"
I've tried replicating this in Grails using the REST client plugin:
String url = "http://mybucket.s3.amazonaws.com/test.pdf?AWSAccessKeyId=IDKEY&Expires=1489458783&Signature=SIGNATURE"
RestResponse resp = rest.put(url){
header "x-amz-server-side-encryption", "AES256"
header "Content-Type", "application/pdf"
body pdf
}
But Amazon rejects the upload, saying the arguments are incorrect... probably because the PDF is being sent as a "body" parameter. Any ideas?
Instead of using a REST client to upload, it would be simpler to use the AWS Java SDK in your Grails app.
See an example of using a pre-signed URL to upload here: http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html
Related
I have a GET-based API Gateway endpoint pointing to a Lambda with Lambda proxy integration enabled.
The API uses AWS IAM as the auth method.
Locally, I have AWS auth set up with a temporary session token.
The following works without issue
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
But when I add query params to the url, it fails
curl -s -X GET "https://<ID>.execute-api.us-west-2.amazonaws.com/dev?a=${v1}&b=${v2}" \
--header "x-amz-security-token: ${SESSION_TOKEN}" \
--user $ACCESS_KEY:$SECRET_KEY \
--aws-sigv4 "aws:amz:us-west-2:execute-api" | jq .
This is the response that I get:
{
"message": "The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\n\nThe Canonical String for this request should have been\n'GET\n/dev\nb=def&a=abc\nhost:<ID>.execute-api.us-west-2.amazonaws.com\nx-amz-date:20230104T112344Z\n\nhost;x-amz-date\<date-token>'\n\nThe String-to-Sign should have been\n'AWS4-HMAC-SHA256\n20230104T112344Z\n20230104/us-west-2/execute-api/aws4_request\<token>'\n"
}
Looks like I need to include the query params in the signed portion of the request. How do I do that? Or is there something else that I'm missing?
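One thing worth checking first is the curl version: early builds of `--aws-sigv4` reportedly left the query string out of the canonical request, which produces exactly this kind of mismatch, so upgrading curl may be enough. For reference, the canonical query string that SigV4 expects is just the parameters percent-encoded and sorted by encoded key; a minimal sketch:

```python
from urllib.parse import quote

def canonical_query_string(params):
    # SigV4 canonical query string: percent-encode each key and value
    # (unreserved chars -_.~ stay literal), then sort by encoded key.
    pairs = sorted(
        (quote(str(k), safe="-_.~"), quote(str(v), safe="-_.~"))
        for k, v in params.items()
    )
    return "&".join(f"{k}={v}" for k, v in pairs)

print(canonical_query_string({"b": "def", "a": "abc"}))  # → a=abc&b=def
```

If curl signs the request without this component while API Gateway computes it with the component included, the two signatures can never match.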
I am unable to acquire an ID token/access token from my AWS Cognito user pool when I supply an auth code. I have written a shell script (see below) and receive invalid_grant back from the server.
I have encoded the Authorization: Basic header for client_id:client_secret, generated with Python as:
import base64
encode = 'my_client_id_string:my_client_secret_string'
base64.b64encode(encode.encode('ascii'))  # b64encode takes bytes, not str, in Python 3
#!/usr/bin/env sh
curl --location --request POST 'https://<domain>.auth.us-east-2.amazoncognito.com/oauth2/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--header 'Authorization: Basic <base64 encode string client_id:client_secret>' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'client_id=<client_id from app settings>' \
--data-urlencode 'code=<code received from redirect url to my localhost app endpoint>' \
--data-urlencode 'redirect_uri=http://localhost:8000/my_redirect'
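A detail worth sanity-checking when invalid_grant appears is the Basic header itself, since a single wrong character in the encoded pair breaks the request. A minimal stdlib sketch with placeholder credentials:

```python
import base64

def basic_auth_header(client_id, client_secret):
    # HTTP Basic auth: base64 of "client_id:client_secret".
    # Encode to bytes first; Python 3's b64encode rejects str.
    token = base64.b64encode(f"{client_id}:{client_secret}".encode("ascii"))
    return "Basic " + token.decode("ascii")

print(basic_auth_header("abc", "def"))  # → Basic YWJjOmRlZg==
```

Comparing this output against the header pasted into the curl command quickly rules out an encoding or copy-paste error.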
Any ideas?
Solved it!
The problem was caused by an invalid client id: I had a typo in the client id value!
Recently I started using AWS pre-signed URLs to upload files to S3. The generated pre-signed URLs are working perfectly when using Python's Requests library as follows:
Generating the pre-signed url:
def create_presigned_post(bucket_name, object_name,
fields=None, conditions=None, expiration=3600):
"""Generate a presigned URL S3 POST request to upload a file
:param bucket_name: string
:param object_name: string
:param fields: Dictionary of prefilled form fields
:param conditions: List of conditions to include in the policy
:param expiration: Time in seconds for the presigned URL to remain valid
:return: Dictionary with the following keys:
url: URL to post to
fields: Dictionary of form fields and values to submit with the POST
:return: None if error.
"""
# Generate a presigned S3 POST URL
s3_client = boto3.client('s3')
try:
response = s3_client.generate_presigned_post(bucket_name,
object_name,
Fields=fields,
Conditions=conditions,
ExpiresIn=expiration)
except ClientError as e:
logging.error(e)
return None
# The response contains the presigned URL and required fields
return response
Running the request to get the pre-signed URL:
# Getting a presigned_url to upload the file into S3 Bucket.
headers = {'Content-type': 'application/json', 'request': 'upload_url', 'target': FILENAME, 'x-api-key': API_KEY}
r_upload = requests.post(url = API_ENDPOINT, headers = headers)
url = json.loads(json.loads(r_upload.text)['body'])['url']
fields_ = json.loads(json.loads(r_upload.text)['body'])['fields']
fields = {
"x-amz-algorithm": fields_["x-amz-algorithm"],
"key": fields_["key"],
"policy": fields_["policy"],
"x-amz-signature": fields_["x-amz-signature"],
"x-amz-date": fields_["x-amz-date"],
"x-amz-credential": fields_["x-amz-credential"],
"x-amz-security-token": fields_["x-amz-security-token"]
}
fileobj = open(FILENAME, 'rb')
http_response = requests.post(url, data=fields,files={'file': (FILENAME, fileobj)})
Valid Response
"{\"url\": \"https://****.s3.amazonaws.com/\",
\"fields\":
{\"key\": \"******\", \"x-amz-algorithm\": \"*******\", \"x-amz-credential\": \"*******\", \"x-amz-date\": \"*********\", \"x-amz-security-token\": \"********", \"policy\": \"**********\", \"x-amz-signature\": \"*******\"}}
And as you can see, I'm providing no AWSAccessKey or any other credentials when uploading the file with the generated pre-signed URL. That is as expected: a pre-signed URL is created to be handed to external users, who don't have to provide any credentials to use it.
However, when trying to run the same call made by Python's Requests library using cURL, the request fails with the error:
< HTTP/1.1 403 Forbidden
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
To get the exact request call made by requests.post, I'm running:
req = http_response.request
command = "curl -X {method} -H {headers} -d '{data}' '{uri}'"
method = "PUT"
uri = req.url
data = req.body
headers = ['"{0}: {1}"'.format(k, v) for k, v in req.headers.items()]
headers = " -H ".join(headers)
print(command.format(method=method, headers=headers, data=data, uri=uri))
Which returns:
curl -v -X PUT -H "Connection: keep-alive" --upload-file xxxx.zip -H "Accept-Encoding: gzip, deflate" -H "Accept: */*" -H "User-Agent: python-requests/2.18.4" -H "Content-Length: xxxx" -H "Content-Type: multipart/form-data; boundary=8a9864bdxxxxx00100ba04cc055a" -d '--8a9864bd377041xxxxx04cc055a
Content-Disposition: form-data; name="x-amz-algorithm"
AWS4-HMAC-SHA256
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="key"
xxxxx.zip
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-signature"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-security-token"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-date"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="policy"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-credential"
xxxxx/xxxxx/xxxx/s3/aws4_request
' 'https://xxxxx.s3.amazonaws.com/'
Then I reformulated it as:
$ curl -v -T file "https://****.s3.amazonaws.com/?key=************&x-amz-algorithm=***************&x-amz-credential=*************&x-amz-security-token=************&policy=**********&x-amz-signature=****************"
After researching, I found nothing similar to this issue, but:
https://aws.amazon.com/es/premiumsupport/knowledge-center/s3-access-denied-error/
This still seems illogical to me, because I'm not supposed to enter any credentials when using a pre-signed URL.
I don't know if I'm missing part of the complete request made by Python's Requests library.
Any ideas, please!
Kind regards,
Rshad
This simple curl command should work. With a usual pre-signed URL, it would be as follows:
curl -v \
-F key=<filename> \
-F x-amz-algorithm=*** \
-F x-amz-credential=*** \
-F x-amz-date=*** \
-F x-amz-security-token=*** \
-F policy=*** \
-F x-amz-signature=*** \
-F file=@<filename> \
'https://<bucket>.s3.amazonaws.com/'
The -F option lets you specify the additional POST data that should be uploaded to S3 (i.e., the fields data returned with the pre-signed URL).
Kind regards,
I am trying to create a bash script to upload files to my S3 bucket, but I am having difficulty generating the correct signature.
I get the following error message:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
Here is my script:
Thanks for your help!
#!/usr/bin/env bash
#upload to S3 bucket
sourceFilePath="$1"
#file path at S3
folderPathAtS3="packages";
#S3 bucket region
region="eu-central-1"
#S3 bucket name
bucket="my-bucket-name";
#S3 HTTP Resource URL for your file
resource="/${bucket}/${folderPathAtS3}";
#set content type
contentType="gzip";
#get date as RFC 7231 format
dateValue="$(date +'%a, %d %b %Y %H:%M:%S %z')"
acl="x-amz-acl:private"
#String to generate signature
stringToSign="PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}";
#S3 key
s3Key="my-key";
#S3 secret
s3Secret="my-secret-code";
#Generate signature, Amazon re-calculates the signature and compares if it matches the one that was contained in your request. That way the secret access key never needs to be transmitted over the network.
signature=$(echo -en "${stringToSign}" | openssl sha1 -hmac ${s3Secret} -binary | base64);
#Curl to make PUT request.
curl -L -X PUT -T "${sourceFilePath}" \
-H "Host: ${bucket}.${region}.amazonaws.com" \
-H "Date: ${dateValue}" \
-H "Content-Type: ${contentType}" \
-H "$acl" \
-H "Authorization: AWS ${s3Key}:${signature}" \
https://s3.amazonaws.com/${bucket}/${folderPathAtS3}
Your signature seems fine, but your request is wrong and consequently does not match.
-H "Host: ${bucket}.${region}.amazonaws.com" \ is incorrect.
The correct value would be ${bucket}.s3.${region}.amazonaws.com; you're overlooking the s3. in the hostname. But even if that were correct, the request would still be invalid, because your URL https://s3.amazonaws.com/${bucket}/... also includes the bucket, which means your bucket name is implicitly added to the beginning of the object key because it appears twice.
Additionally, https://s3.amazonaws.com is us-east-1. To connect to the correct region, your URL needs to be one of these variants:
https://s3.${region}.amazonaws.com/${bucket}/${folderPathAtS3}
https://${bucket}.s3.${region}.amazonaws.com/${folderPathAtS3}
https://${bucket}.s3.amazonaws.com/${folderPathAtS3}
Use one of these formats, and eliminate -H "Host: ..." because it will then be redundant.
The last of the three URL formats will only start to work once the bucket is more than a few minutes or hours old: S3 creates the DNS entries for it automatically, but that takes some time.
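When debugging mismatches like this, it can also help to recompute the script's openssl pipeline (echo -en ... | openssl sha1 -hmac ... | base64) in a few lines of stdlib Python and compare the two outputs. The string-to-sign below mirrors the script's variables; the date and secret are placeholders:

```python
import base64
import hashlib
import hmac

def sign_v2(string_to_sign, secret):
    # Legacy AWS signature (v2-style) as used by the script:
    # base64( HMAC-SHA1( secret, string_to_sign ) )
    digest = hmac.new(secret.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Mirrors: "PUT\n\n${contentType}\n${dateValue}\n${acl}\n${resource}"
string_to_sign = (
    "PUT\n\n"                            # HTTP verb, empty Content-MD5
    "gzip\n"                             # Content-Type
    "Thu, 01 Jan 1970 00:00:00 +0000\n"  # Date header (placeholder)
    "x-amz-acl:private\n"                # canonicalized x-amz headers
    "/my-bucket-name/packages"           # canonicalized resource
)
sig = sign_v2(string_to_sign, "my-secret-code")
print(sig)
```

If the Python and openssl outputs agree but S3 still rejects the request, the problem is in the request (URL, headers, or date), not in the signing step, which is exactly the situation here.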
I am trying to retrieve data from the SDC API, which is protected by Kerberos. Initially I post the credentials to the SCH login page and then use the cookies generated to access the SDC REST API. However, I am not able to post the credentials: the response code is 401, and hence I cannot access the API.
dpm_auth_creds = {"userName":"", "password":"" }
headers = {"Content-Type": "application/json", "X-Requested-By": "SDC"}
auth_request = requests.post("https://url:18641/sch/security/users" , data=json.dumps(dpm_auth_creds), headers=headers, verify="file.pem")
cookies = auth_request.cookies
print(auth_request.status_code)
print(auth_request.headers)
url = requests.get("https://url:18641/jobrunner/rest/v1/sdcs", cookies=cookies)
print(url.text)
The response code for auth_request.status_code is 401.
This is from the REST API page in Control Hub:
# login to Control Hub security app
curl -X POST -d '{"userName":"DPMUserID", "password": "DPMUserPassword"}' https://cloud.streamsets.com/security/public-rest/v1/authentication/login --header "Content-Type:application/json" --header "X-Requested-By:SCH" -c cookie.txt
# generate auth token from security app
sessionToken=$(cat cookie.txt | grep SSO | rev | grep -o '^\S*' | rev)
echo "Generated session token : $sessionToken"
# Call SDC REST APIs using auth token
curl -X GET https://cloud.streamsets.com/security/rest/v1/currentUser --header "Content-Type:application/json" --header "X-Requested-By:SCH" --header "X-SS-REST-CALL:true" --header "X-SS-User-Auth-Token:$sessionToken" -i
So your Python code should be more like:
dpm_auth_creds = {"userName":"", "password":"" }
headers = {"Content-Type": "application/json", "X-Requested-By": "SDC"}
auth_request = requests.post("https://url:18641/security/public-rest/v1/authentication/login" , data=json.dumps(dpm_auth_creds), headers=headers, verify="file.pem")
cookies = auth_request.cookies
print(auth_request.status_code)
print(auth_request.headers)
# Need to pass value of SS-SSO-LOGIN cookie as X-SS-User-Auth-Token header
headers = {
"Content-Type":"application/json",
"X-Requested-By":"SCH",
"X-SS-REST-CALL":"true",
"X-SS-User-Auth-Token":auth_request.cookies['SS-SSO-LOGIN']
}
url = requests.get("https://url:18641/jobrunner/rest/v1/sdcs", headers=headers)
print(url.text)