S3 AWS Invalid according to Policy: Policy Condition failed: ["eq", "$content-type", "audio/wav"]

I'm trying to upload objects to S3 using presigned URLs.
Here is my Python code (adapted from this guide: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-presigned-urls.html):
import json
import requests
URL = "https://example.server-give-me-presigned-URLs"
r = requests.get(url = URL)
response = r.json()
print("debug type payload: ", type(response))
print("debug key: ", response.keys())
print("debug value: ", response["jsonresult"])
print("debug ", response["signedupload"]["fields"])
print("debug ", response["signedupload"]["url"])
print("url type ", type(response["signedupload"]["url"]))
with open("test-wav-file.wav", 'rb') as f:
    files = {'file': ("/test-wav-file.wav", f)}
    http_response = requests.post(response["signedupload"]["url"], data=response["signedupload"]["fields"], files=files)
print("File upload HTTP: ", http_response.text)
I get this error when I run it:
('File upload HTTP: ', u'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: ["eq", "$content-type", "audio/wav"]</Message><RequestId>1TRNC1XPX4MCJPN6</RequestId><HostId>h/YdSUDuPeZhUU1TqAu1BZrCfyXKiNTYvisbvp3iaLcoLoriQPREnJI1LZp69hDE4kOWYSVog7A=</HostId></Error>')
But when I set the request header to headers = {'Content-type': 'audio/wav'},
I get this error instead:
('File upload HTTP: ', u'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>PreconditionFailed</Code><Message>At least one of the pre-conditions you specified did not hold</Message><Condition>Bucket POST must be of the enclosure-type multipart/form-data</Condition><RequestId>S86QYFG0AYTY26WQ</RequestId><HostId>lvxNkydNcuiwE/UVZY2xRMBoqk/BSUn7qathgXWSu3Fii8ZlVlKDkEjOotw4fmU3bfFgjYbsspE=</HostId></Error>')
So is there a content-type that satisfies both conditions? Please help me.
Many thanks

I solved my problem:
just add the two fields "content-type": "audio/wav" and "x-amz-meta-tag": "" to the payload (the fields dict, not the request headers) and it works :D
x-amz-meta-tag can be any value.
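For reference, a minimal sketch of the working upload (assuming the same signedupload response shape as in the question):
fields = dict(response["signedupload"]["fields"])
fields["content-type"] = "audio/wav"  # must match the policy condition exactly
fields["x-amz-meta-tag"] = ""         # per the policy, any value works here
with open("test-wav-file.wav", 'rb') as f:
    files = {'file': ("test-wav-file.wav", f)}
    http_response = requests.post(response["signedupload"]["url"], data=fields, files=files)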
Hope this helps someone like me.

Related

AWS sagemaker endpoint received client (400) error

I've deployed a TensorFlow multi-label classification model to a SageMaker endpoint as follows:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type="ml.m5.2xlarge", endpoint_name='testing-2')
It gets deployed and works fine when I invoke it from the Sagemaker Jupyter instance:
sample = ['this movie was extremely good']
output=predictor.predict(sample)
output:
{'predictions': [[0.00370046496,
                  4.32942124e-06,
                  0.00080883503,
                  9.25126587e-05,
                  0.00023958087,
                  0.000130862]]}
However, I am unable to send a request to the deployed endpoint from other notebooks or sagemaker studio. I'm unsure of the request format.
I've tried several variations in the input format and still failed. The error message is as below:
Request:
{
  "body": {
    "text": "Testing model's prediction on this text"
  },
  "contentType": "application/json",
  "endpointName": "testing-2",
  "customURL": "",
  "customHeaders": [
    {
      "Key": "sm_endpoint_name",
      "Value": "testing-2"
    }
  ]
}
Error:
Error invoking endpoint: Received client error (400) from primary with message "{ "error": "Failed to process element:
0 key: text of 'instances' list. Error: INVALID_ARGUMENT: JSON object: does not have named input: text" }".
See https://us-west-2.console.aws.amazon.com/cloudwatch/home?region=us-west-2#logEventViewer:group=/aws/sagemaker/Endpoints/testing-2
in account 793433463428 for more information.
Is there any way to find out exactly how the model expects the request format to be?
Earlier I had the same model on my local system and the way I tested it was using this curl request:
curl -s -H 'Content-Type: application/json' -d '{"text": "what ugly posts"}' http://localhost:7070/sentiment
And it worked fine without any issues.
I've tried different formats and replaced the "text" key inside the body with other words like "input" and "body", and so on, with no luck.
Based on your description above, I assume you are deploying the TensorFlow model using the SageMaker TensorFlow container.
If you want to view what your model expects as input, you can use the saved_model CLI. Given a model directory like:
1
├── keras_metadata.pb
├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00001
    └── variables.index
!saved_model_cli show --all --dir {"1"}
After you have confirmed the input name above you can invoke the endpoint as follows:
import json
import boto3
client = boto3.client('runtime.sagemaker')
data = {"instances": ['this movie was extremely good']}
response = client.invoke_endpoint(EndpointName='<EndpointName>',
                                  ContentType='application/json',  # the TF Serving container expects JSON
                                  Body=json.dumps(data))
response_body = response['Body']
print(response_body.read())
The same payload can then also be used in Studio when invoking the endpoint.
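If the saved_model CLI instead reports a named input, each instance must be a JSON object keyed on that name. A minimal sketch, assuming a hypothetical input name text_input (substitute whatever the CLI actually shows):
import json
import boto3
client = boto3.client('runtime.sagemaker')
# "text_input" is a placeholder, not a name from the original model.
data = {"instances": [{"text_input": "this movie was extremely good"}]}
response = client.invoke_endpoint(EndpointName='testing-2',
                                  ContentType='application/json',
                                  Body=json.dumps(data))
print(response['Body'].read())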

Private API gateway with IAM authentication not liking my security token

I have a private API gateway with a / endpoint and a /foo with IAM auth enabled.
I created a policy which I attached to my instance's role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "execute-api:Invoke"
      ],
      "Resource": [
        "arn:aws:execute-api:*:*:*"
      ]
    }
  ]
}
I have some code to do the AWS signature stuff, and I also used Postman to create a snippet with the same key/secret/session token. Both give the same result on /foo. It always says:
{"message":"The security token included in the request is invalid"}
I had a concern that the docs only mention attaching the policy to a user or group, not to a role.
https://aws.amazon.com/premiumsupport/knowledge-center/iam-authentication-api-gateway/
That whole page doesn't mention roles once... Can I attach a policy to a role to use it with an IAM-authed API gateway?
The / endpoint returns a 200 response, and my API GW resource policy denies/allows access to *, so if I can get to /, I can get to /foo. (And if I disable the IAM auth, I can get /foo.)
The VPC endpoint allows * on *.
In the execution logs I see nothing for the failed attempts.
The attempts to / log: API Key authorized because method 'GET /' does not require API Key. Request will not contribute to throttle or quota limits
And I can see the X-Amz-Security-Token in the payload.
But requests to /foo don't appear there, only in the access logs. I've added some fields, but nothing sheds any light on the problem.
Anything I'm forgetting? Any ideas why it isn't working?
Here is my signing Python. There may be a bug, but it gets the same error as Postman, which makes me think not! Replace the host/endpoint and path with your own. I added a few debug print lines to show the intermediate steps, because I initially got some errors about the canonical URL being wrong.
#!/usr/bin/python3
# This is based on the AWS General Reference
# Signing AWS API Requests top available at
# https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
# To use it :
# pip3 install requests
# To use it on instance with instance role :
# sudo yum -y -q install jq
# export AWS_ACCESS_KEY_ID=$(curl 169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance | jq -r .AccessKeyId)
# export AWS_SECRET_ACCESS_KEY=$(curl 169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance | jq -r .SecretAccessKey)
# export AWS_SESSION_TOKEN=$(curl 169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance | jq -r .Token)
# See: http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html
# This version makes a GET request and passes the signature
# in the Authorization header.
import sys, os, base64, datetime, hashlib, hmac
import requests # pip install requests
# ************* REQUEST VALUES *************
method = 'GET'
service = 'execute-api'
host = 'xxx.execute-api.eu-west-2.amazonaws.com'
region = 'eu-west-2'
endpoint = 'https://xxx.execute-api.eu-west-2.amazonaws.com'
path='/stage/foo/'
request_parameters = ''
# Key derivation functions. See:
# http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
def sign(key, msg):
    return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()
def getSignatureKey(key, dateStamp, regionName, serviceName):
    kDate = sign(('AWS4' + key).encode('utf-8'), dateStamp)
    kRegion = sign(kDate, regionName)
    kService = sign(kRegion, serviceName)
    kSigning = sign(kService, 'aws4_request')
    return kSigning
# Read AWS access key from env. variables or configuration file. Best practice is NOT
# to embed credentials in code.
access_key = os.environ.get('AWS_ACCESS_KEY_ID')
secret_key = os.environ.get('AWS_SECRET_ACCESS_KEY')
session_token = os.environ.get('AWS_SESSION_TOKEN')
if access_key is None or secret_key is None:
    print('No access key is available.')
    sys.exit()
# Create a date for headers and the credential string
t = datetime.datetime.utcnow()
amzdate = t.strftime('%Y%m%dT%H%M%SZ')
datestamp = t.strftime('%Y%m%d') # Date w/o time, used in credential scope
# ************* TASK 1: CREATE A CANONICAL REQUEST *************
# http://docs.aws.amazon.com/general/latest/gr/sigv4-create-canonical-request.html
# Step 1 is to define the verb (GET, POST, etc.)--already done.
# Step 2: Create canonical URI--the part of the URI from domain to query
# string (use '/' if no path)
canonical_uri = path
# Step 3: Create the canonical query string. In this example (a GET request),
# request parameters are in the query string. Query string values must
# be URL-encoded (space=%20). The parameters must be sorted by name.
# For this example, the query string is pre-formatted in the request_parameters variable.
canonical_querystring = request_parameters
# Step 4: Create the canonical headers and signed headers. Header names
# must be trimmed and lowercase, and sorted in code point order from
# low to high. Note that there is a trailing \n.
canonical_headers = 'host:' + host + '\n' + 'x-amz-date:' + amzdate + '\n' + 'x-amz-security-token:' + session_token + '\n'
# Step 5: Create the list of signed headers. This lists the headers
# in the canonical_headers list, delimited with ";" and in alpha order.
# Note: The request can include any headers; canonical_headers and
# signed_headers lists those that you want to be included in the
# hash of the request. "Host" and "x-amz-date" are always required.
signed_headers = 'host;x-amz-date;x-amz-security-token'
# Step 6: Create payload hash (hash of the request body content). For GET
# requests, the payload is an empty string ("").
payload_hash = hashlib.sha256(('').encode('utf-8')).hexdigest()
# Step 7: Combine elements to create canonical request
canonical_request = method + '\n' + canonical_uri + '\n' + canonical_querystring + '\n' + canonical_headers + '\n' + signed_headers + '\n' + payload_hash
print ("CANONICAL REQUEST : " + canonical_request)
print ()
# ************* TASK 2: CREATE THE STRING TO SIGN*************
# Match the algorithm to the hashing algorithm you use, either SHA-1 or
# SHA-256 (recommended)
algorithm = 'AWS4-HMAC-SHA256'
credential_scope = datestamp + '/' + region + '/' + service + '/' + 'aws4_request'
string_to_sign = algorithm + '\n' + amzdate + '\n' + credential_scope + '\n' + hashlib.sha256(canonical_request.encode('utf-8')).hexdigest()
print ("STRING TO SIGN : " + string_to_sign )
# ************* TASK 3: CALCULATE THE SIGNATURE *************
# Create the signing key using the function defined above.
signing_key = getSignatureKey(secret_key, datestamp, region, service)
# Sign the string_to_sign using the signing_key
signature = hmac.new(signing_key, (string_to_sign).encode('utf-8'), hashlib.sha256).hexdigest()
# ************* TASK 4: ADD SIGNING INFORMATION TO THE REQUEST *************
# The signing information can be either in a query string value or in
# a header named Authorization. This code shows how to use a header.
# Create authorization header and add to request headers
authorization_header = algorithm + ' ' + 'Credential=' + access_key + '/' + credential_scope + ', ' + 'SignedHeaders=' + signed_headers + ', ' + 'Signature=' + signature
# The request can include any headers, but MUST include "host", "x-amz-date",
# and (for this scenario) "Authorization". "host" and "x-amz-date" must
# be included in the canonical_headers and signed_headers, as noted
# earlier. Order here is not significant.
# Python note: The 'host' header is added automatically by the Python 'requests' library.
headers = {'x-amz-date':amzdate, 'Authorization':authorization_header, 'X-Amz-Security-Token':session_token}
# ************* SEND THE REQUEST *************
request_url = endpoint + path + canonical_querystring
print('\nBEGIN REQUEST++++++++++++++++++++++++++++++++++++')
print('Request URL = ' + request_url)
print('Headers = ' + str(headers))
r = requests.get(request_url, headers=headers)
print('\nRESPONSE++++++++++++++++++++++++++++++++++++')
print('Response code: %d\n' % r.status_code)
print(r.text)
Aha, I found the answer. I was pulling the credentials from the wrong metadata endpoint. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-data-categories.html shows that "identity-credentials/ec2/security-credentials/ec2-instance" is "Internal use only". There is an iam/security-credentials/{role-name} endpoint that works a LOT better!!
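For reference, a minimal sketch of pulling the role credentials from the correct endpoint in Python (assuming IMDSv1 and an instance role attached):
import os
import requests
base = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/'
role = requests.get(base).text.strip()    # the attached role's name
creds = requests.get(base + role).json()  # contains AccessKeyId, SecretAccessKey, Token
os.environ['AWS_ACCESS_KEY_ID'] = creds['AccessKeyId']
os.environ['AWS_SECRET_ACCESS_KEY'] = creds['SecretAccessKey']
os.environ['AWS_SESSION_TOKEN'] = creds['Token']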

How to delete multiple keys from DigitalOcean Spaces object storage?

I am trying to delete multiple objects at once using delete_objects, but I am getting an error. I haven't found any solution to this issue.
client = boto3.client("s3", **config)
response = client.delete_objects(
    Bucket=BUCKET,
    Delete={
        'Objects': [
            {
                'Key': 'asdasd1.png',
            },
            {
                'Key': 'asdasd1.png',
            }
        ]
    },
    RequestPayer='requester'
)
I get an error like this:
An error occurred (NotImplemented) when calling the DeleteObjects operation: Unknown
INFO: 127.0.0.1:46958 - "DELETE /image/ HTTP/1.1" 500 Internal Server Error
Maybe this can help you; it's another way to do it:
def cleanup_from_s3(bucket, remote_path):
    # list_s3 is the author's helper (not shown) that returns the object listing for the prefix
    s3_contents = list_s3(bucket, remote_path)
    if s3_contents == []:
        return
    for s3_content in s3_contents:
        filename = s3_content["Key"]
        s3_client.delete_object(Bucket=bucket, Key="{0}/{1}".format(remote_path, filename))
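One more thing worth trying (an assumption on my part, not confirmed in this thread): Spaces may answer NotImplemented for S3 parameters it does not support, such as RequestPayer, so a stripped-down bulk delete might succeed:
response = client.delete_objects(
    Bucket=BUCKET,
    Delete={
        'Objects': [
            {'Key': 'asdasd1.png'},
            {'Key': 'asdasd2.png'},  # hypothetical second key
        ],
    },
)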

AWS Presigned URL works with Python's Requests library but fails with cURL

Recently I started using AWS pre-signed URLs to upload files to S3. The generated pre-signed URLs are working perfectly when using Python's Requests library as follows:
Generating the pre-signed url:
def create_presigned_post(bucket_name, object_name,
                          fields=None, conditions=None, expiration=3600):
    """Generate a presigned URL S3 POST request to upload a file

    :param bucket_name: string
    :param object_name: string
    :param fields: Dictionary of prefilled form fields
    :param conditions: List of conditions to include in the policy
    :param expiration: Time in seconds for the presigned URL to remain valid
    :return: Dictionary with the following keys:
        url: URL to post to
        fields: Dictionary of form fields and values to submit with the POST
    :return: None if error.
    """
    # Generate a presigned S3 POST URL
    s3_client = boto3.client('s3')
    try:
        response = s3_client.generate_presigned_post(bucket_name,
                                                     object_name,
                                                     Fields=fields,
                                                     Conditions=conditions,
                                                     ExpiresIn=expiration)
    except ClientError as e:
        logging.error(e)
        return None
    # The response contains the presigned URL and required fields
    return response
Running the request to get the presigned url
# Getting a presigned URL to upload the file into the S3 bucket.
headers = {'Content-type': 'application/json', 'request': 'upload_url', 'target': FILENAME, 'x-api-key': API_KEY}
r_upload = requests.post(url=API_ENDPOINT, headers=headers)
url = json.loads(json.loads(r_upload.text)['body'])['url']
fields_ = json.loads(json.loads(r_upload.text)['body'])['fields']
fields = {
    "x-amz-algorithm": fields_["x-amz-algorithm"],
    "key": fields_["key"],
    "policy": fields_["policy"],
    "x-amz-signature": fields_["x-amz-signature"],
    "x-amz-date": fields_["x-amz-date"],
    "x-amz-credential": fields_["x-amz-credential"],
    "x-amz-security-token": fields_["x-amz-security-token"]
}
fileobj = open(FILENAME, 'rb')
http_response = requests.post(url, data=fields, files={'file': (FILENAME, fileobj)})
Valid Response
"{\"url\": \"https://****.s3.amazonaws.com/\",
\"fields\":
{\"key\": \"******\", \"x-amz-algorithm\": \"*******\", \"x-amz-credential\": \"*******\", \"x-amz-date\": \"*********\", \"x-amz-security-token\": \"********", \"policy\": \"**********\", \"x-amz-signature\": \"*******\"}}
As you can see, I'm providing no AWS access key or any other credentials when uploading the file with the generated pre-signed URL, and that is the point: a pre-signed URL is meant to be given to external users, who don't have to provide credentials when using it.
However, when I try to run the same call made by Python's Requests library using cURL, the request fails with this error:
< HTTP/1.1 403 Forbidden
<Error><Code>AccessDenied</Code><Message>Access Denied</Message></Error>
To get the exact request call made by requests.post, I'm running:
req = http_response.request
command = "curl -X {method} -H {headers} -d '{data}' '{uri}'"
method = "PUT"
uri = req.url
data = req.body
headers = ['"{0}: {1}"'.format(k, v) for k, v in req.headers.items()]
headers = " -H ".join(headers)
print(command.format(method=method, headers=headers, data=data, uri=uri))
Which returns:
curl -v -X PUT -H "Connection: keep-alive" --upload-file xxxx.zip -H "Accept-Encoding: gzip, deflate" -H "Accept: */*" -H "User-Agent: python-requests/2.18.4" -H "Content-Length: xxxx" -H "Content-Type: multipart/form-data; boundary=8a9864bdxxxxx00100ba04cc055a" -d '--8a9864bd377041xxxxx04cc055a
Content-Disposition: form-data; name="x-amz-algorithm"
AWS4-HMAC-SHA256
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="key"
xxxxx.zip
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-signature"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-security-token"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-date"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="policy"
*****
--8a9864bd377041e0b00100ba04cc055a
Content-Disposition: form-data; name="x-amz-credential"
xxxxx/xxxxx/xxxx/s3/aws4_request
' 'https://xxxxx.s3.amazonaws.com/'
Then I reformulated it:
$ curl -v -T file "https://****.s3.amazonaws.com/?key=************&x-amz-algorithm=***************&x-amz-credential=*************&x-amz-security-token=************&policy=**********&x-amz-signature=****************
After researching, I found nothing similar to this issue, but:
https://aws.amazon.com/es/premiumsupport/knowledge-center/s3-access-denied-error/
This still seems illogical to me, because I'm not supposed to provide any credentials when using a pre-signed URL.
I don't know if I'm missing something from the complete request made by Python's Requests library.
Any ideas, please!
Kind regards,
Rshad
This simple curl command should work; with a usual presigned URL it would be as follows:
curl -v \
-F key=<filename> \
-F x-amz-algorithm=*** \
-F x-amz-credential=*** \
-F x-amz-date=*** \
-F x-amz-security-token=*** \
-F policy=*** \
-F x-amz-signature=*** \
-F file=@<filename> \
'https://<bucket>.s3.amazonaws.com/'
The -F flag lets you specify the additional POST form data that should be uploaded to S3 (i.e. the fields data returned with the pre-signed URL). Note that the file field must come last: S3 ignores any form fields that appear after the file.
Kind regards,

Getting a 403 while creating a user, followed every step right so far

It looks like I have followed every step (given that the documentation is extremely lacking, the steps are sourced from multiple places). This is my code:
import json
import random
import string

from httplib2 import Http
from googleapiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

def create_user(cred_file_location, user_first_name, user_last_name, user_email):
    cred_data = json.loads(open(cred_file_location).read())
    access_email = cred_data['client_email']
    private_key = cred_data['private_key']
    # I have tried with the scope as a single string, and also
    # as an array of a single string. Neither worked
    credentials = SignedJwtAssertionCredentials(access_email, private_key, ["https://www.googleapis.com/auth/admin.directory.user"])
    http = Http()
    http = credentials.authorize(http)
    service = build('admin', 'directory_v1', http=http)
    users = service.users()
    userinfo = {
        'primaryEmail': user_email,
        'name': {
            'givenName': user_first_name,
            'familyName': user_last_name
        },
        'password': ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(80))
    }
    users.insert(body=userinfo).execute()
I downloaded the JSON key right, and it is loading it correctly. This is my JSON key (I am redacting certain parts of identifying information, I have kept some of it there to show that I am loading the correct info):
{
  "type": "service_account",
  "private_key_id": "c6ae56a9cb267fe<<redacted>>",
  "private_key": "<<redacted>>",
  "client_email": "account-1@<<redacted>>.iam.gserviceaccount.com",
  "client_id": "10931536<<redacted>>",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://accounts.google.com/o/oauth2/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/account-1%40<<redacted>>.iam.gserviceaccount.com"
}
This is how these credentials look in the developer console: (screenshot omitted)
I have also enabled sitewide access for the service account: (screenshot omitted)
I have no clue as to why I am still getting these 403s:
File "/usr/lib/python2.7/site-packages/googleapiclient/http.py", line 729, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 403 when requesting https://www.googleapis.com/admin/directory/v1/users?alt=json returned "Not Authorized to access this resource/api">
Any help is greatly appreciated.
Finally, in some random Stack Overflow answer, I found the solution: I have to impersonate (sub as) a user to execute any request. Essentially:
credentials = SignedJwtAssertionCredentials(
    access_email,
    private_key,
    ["https://www.googleapis.com/auth/admin.directory.user"])
changes to:
credentials = SignedJwtAssertionCredentials(
    access_email,
    private_key,
    ["https://www.googleapis.com/auth/admin.directory.user"],
    sub="user@example.org")
All requests will now be made as if on behalf of user@example.org. (The user you impersonate must have sufficient admin privileges to manage users.)
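Putting it together, a minimal sketch of the delegated setup (admin@example.org is a placeholder for a real admin account in your domain; the imports match the legacy oauth2client/googleapiclient stack used above):
from httplib2 import Http
from googleapiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

credentials = SignedJwtAssertionCredentials(
    access_email,
    private_key,
    ["https://www.googleapis.com/auth/admin.directory.user"],
    sub="admin@example.org")  # the user being impersonated
service = build('admin', 'directory_v1', http=credentials.authorize(Http()))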