AWS Pre-Signed Post URL suddenly stopped working - amazon-web-services

So I have been working with aws-s3 POST signed URLs for a month now, and it was working like a charm. All of a sudden (I didn't change any policies for my IAM user or bucket) it started giving me a forbidden request:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Invalid according to Policy: Policy expired.</Message>
</Error>
I found that AWS sent me an email informing me that my trial ended. Does this have anything to do with it?
Note: I can still upload files to my S3 bucket manually.
Edit
The code
const params = {
  Bucket: 'ratemycourses',
  Fields: {
    key: `profileImage/${userId}/profile.jpeg`,
    acl: 'public-read',
    'Content-Type': 'multipart/form-data',
  },
  Expires: 60,
};

const data = await s3.createPresignedPost(params); // I promisified the callback function
return data;

The expiration element in your POST policy specifies the expiration date/time of the policy. It looks like your policy has expired. Correct the policy expiration, and then re-create your signed URL.
Here's an example of a POST policy:
{
  "expiration": "2021-07-10T12:00:00.000Z",
  "conditions": [
    {"bucket": "mybucket"},
    ["starts-with", "$key", "user/shahda/"]
  ]
}
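Since the expiration timestamp is plain ISO-8601 UTC, you can check a policy document against the clock before (re-)signing it. A minimal Python sketch, using the hypothetical policy above:

```python
import json
from datetime import datetime, timezone

def policy_expired(policy_json):
    """Return True if the POST policy's expiration lies in the past (UTC)."""
    policy = json.loads(policy_json)
    expires = datetime.strptime(policy["expiration"], "%Y-%m-%dT%H:%M:%S.%fZ")
    return expires.replace(tzinfo=timezone.utc) < datetime.now(timezone.utc)

# The example policy above, long past its expiration date:
policy = '{"expiration": "2021-07-10T12:00:00.000Z", "conditions": [{"bucket": "mybucket"}]}'
print(policy_expired(policy))  # True
```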

Related

Cannot give aws-lambda access to aws-appsync API

I am working on a project where users can upload files to an S3 bucket. The uploaded files are mapped to a GraphQL key (generated by the Amplify CLI), and an aws-lambda function is triggered. All of this works, but as the next step I want the aws-lambda function to create a second file with the same ownership attributes and POST the location of that second file to the GraphQL API.
I figured this shouldn't be too difficult, but I am having a lot of trouble and can't work out where the problem lies.
BACKGROUND/DETAILS
I want the owner of the data (the uploader) to be the only user able to access it, with the aws-lambda function operating in an admin role, able to POST/GET to the API for any owner.
The GraphQL schema looks like this:
type FileUpload @model
  @auth(rules: [
    { allow: owner }]) {
  id: ID!
  foo: String
  bar: String
}
I also found this seemingly promising AWS guide, which I thought would give an IAM role admin access (https://docs.amplify.aws/cli/graphql/authorization-rules/#configure-custom-identity-and-group-claims). I followed it by creating the file amplify/backend/api/<your-api-name>/custom-roles.json with:
{
  "adminRoleNames": ["<YOUR_IAM_ROLE_NAME>"]
}
I replaced "<YOUR_IAM_ROLE_NAME>" with an IAM role to which I have given broad access, including this AppSync policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "appsync:*"
      ],
      "Resource": "*"
    }
  ]
}
This is the role given to my aws-lambda function.
When I attempt to run a simple API query in my aws-lambda function with the above settings, I get this error:
response string:
{
  "data": {
    "getFileUpload": null
  },
  "errors": [
    {
      "path": [
        "getFileUpload"
      ],
      "data": null,
      "errorType": "Unauthorized",
      "errorInfo": null,
      "locations": [
        {
          "line": 3,
          "column": 11,
          "sourceName": null
        }
      ],
      "message": "Not Authorized to access getFileUpload on type Query"
    }
  ]
}
My actual Python lambda script is:
import http.client
import json

API_URL = '<MY_API_URL>'
API_KEY = '<MY_API_KEY>'
HOST = API_URL.replace('https://', '').replace('/graphql', '')

def queryAPI():
    conn = http.client.HTTPSConnection(HOST, 443)
    headers = {
        'Content-type': 'application/graphql',
        'x-api-key': API_KEY,
        'host': HOST
    }
    print('conn: ', conn)
    query = '''
    {
        getFileUpload(id: "<ID_HERE>") {
            description
            createdAt
            baseFilePath
        }
    }
    '''
    graphql_query = {
        'query': query
    }
    query_data = json.dumps(graphql_query)
    print('query data: ', query_data)
    conn.request('POST', '/graphql', query_data, headers)
    response = conn.getresponse()
    response_string = response.read().decode('utf-8')
    print('response string: ', response_string)
I pass in the API key and API URL above, in addition to giving aws-lambda the IAM role. I understand that only one is probably needed, but I am trying to get the process working first, then pare it back.
QUESTION(s)
As far as I understand, I am:
1. providing the appropriate @auth rules to my GraphQL schema based on my goals, and
2. giving my aws-lambda function sufficient IAM authorization (via both an IAM role and an API key) to override any potentially restrictive @auth rules in my GraphQL schema.
But clearly something is not working. Can anyone point me towards a problem that I am overlooking?
I had a similar problem just yesterday.
It was not 1:1 what you're trying to do, but maybe it's still helpful.
I was trying to give lambda functions permission to access data based on my GraphQL schema. The schema had different @auth directives, which caused the lambda functions to lose access to the data, even though I had given them permissions via the CLI and IAM roles. Although the documentation says this should work, it didn't:
if you grant a Lambda function in your Amplify project access to the GraphQL API via amplify update function, then the Lambda function's IAM execution role is allow-listed to honor the permissions granted on the Query, Mutation, and Subscription types.
Therefore, these functions have special access privileges that are scoped based on their IAM policy instead of any particular #auth rule.
So I ended up adding @auth(rules: [{ allow: custom }]) to all parts of my schema that I want to access via lambda functions.
When doing this, make sure to add "lambda" as an auth mode to your API via amplify update api.
In the authentication lambda function, you can then check whether the user who is invoking the function has access to the requested query/S3 data.
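With the lambda auth mode in place, the authorizer function itself is small: AppSync invokes it with the request's authorization token and expects an isAuthorized flag back (per the AppSync Lambda authorization contract). A minimal Python sketch; the token comparison is a placeholder for your real ownership check:

```python
def handler(event, context):
    """Minimal AppSync Lambda authorizer: allow or deny the request.

    event["authorizationToken"] carries whatever token the client sent;
    the equality check below is a stand-in for real ownership validation.
    """
    token = event.get("authorizationToken", "")
    return {
        "isAuthorized": token == "custom-authorized",  # placeholder check
        "resolverContext": {},   # extra values made available to resolvers
        "deniedFields": [],      # optionally deny specific fields
        "ttlOverride": 300,      # cache the decision for 5 minutes
    }
```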

restore elasticsearch snapshot from a different s3 region

I have an AWS Elasticsearch domain in the eu-west-1 region and have taken a snapshot to an S3 bucket sub-folder, also in the same region.
I have also deployed a second AWS Elasticsearch domain in another AWS region, eu-west-2.
I added S3 bucket replication between the buckets, but when I try to register the repository on the eu-west-2 AWS ES domain, I get the following error:
500
{"error":{"root_cause":[{"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists"}],"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists","caused_by":{"type":"amazon_s3_exception","reason":"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 14F0571DFF522922; S3 Extended Request ID: U1OnlKPOkfCNFzoV9HC5WBHJ+kfhAZDMOG0j0DzY5+jwaRFJvHkyzBacilA4FdIqDHDYWPCrywU=)"}},"status":500}
This is the code I am using to register the repository on the new cluster (taken from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore):
import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/'  # include https:// and trailing /
region = 'eu-west-2'  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

# Register repository
path = '_snapshot/es-elk-prod'  # the Elasticsearch API endpoint
url = host + path

payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}

headers = {"Content-Type": "application/json"}

r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
From the logs, I get:
curl -X GET 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-mw-elk-prod/_all?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "amazon_s3_exception",
        "reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
      }
    ],
    "type" : "amazon_s3_exception",
    "reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
  },
  "status" : 500
}
The ARN is able to access the S3 bucket; it is the same ARN I use to snapshot the eu-west-2 domain to S3. Since the eu-west-1 snapshot is stored in a sub-folder of the S3 bucket, I added a path to the code, such that:
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "path": "es-elk-prod",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
but this didn't work either.
What is the correct way to restore snapshot created in one aws region into another aws region?
Any advice is much appreciated.
I've had similar, but not identical, error messages ("The bucket is in this region: eu-west-1. Please use this region to retry the request") when moving from eu-west-1 to us-west-2.
According to Amazon's documentation (under "Migrating data to a different domain"), you will need to specify an endpoint rather than a region:
If you encounter this error, try replacing "region": "us-east-2" with "endpoint": "s3.amazonaws.com" in the PUT statement and retry the request.
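Applied to the registration payload from the question, the suggested swap would look like this (bucket and role ARN are the question's own placeholders):

```python
# Repository settings with "endpoint" in place of "region", per the
# documentation quoted above.
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "endpoint": "s3.amazonaws.com",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***",
    },
}
print("endpoint" in payload["settings"], "region" in payload["settings"])  # True False
```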

AWS S3 file (redirected from AWS Lambda Function) responds to CORS preflight OPTIONS request with a 403 error

I have an AWS S3 bucket configured as a static website with the following CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I also have a Lambda function that redirects to a specific page on the aforementioned S3 bucket. Here is a gist of the code:
module.exports.endPoint = (event, context, callback) => {
  // Do some cool processing and on success:
  redirectToUrl(303, 's3-bucket.amazon.com/page.html', callback);
};

function redirectToUrl(statusCode, url, callback) {
  const response = {
    statusCode: statusCode,
    headers: {
      Location: url,
      'Access-Control-Allow-Origin': '*',
      'Content-Type': 'application/json'
    },
    body: '',
  };
  console.log('Redirecting with status code ' + statusCode + ' to ' + url);
  callback(null, response);
}
I'm able to access the S3 HTML page directly from the browser using the same URL in the code. Yes, the API domain is different from the S3 domain:
api.domain.com --> initiates the request (redirection)
sub.domain.com/page.html --> requested resource (redirection target)
The server responds to the CORS preflight OPTIONS request with a 403 error, and the browser reports the following error message:
Access to XMLHttpRequest at 'S3 File' (redirected from 'API Endpoint')
from origin 'null' has been blocked by CORS policy: Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource.
I originally set up the site with Serverless Framework by adding the following lines to my serverless.yml:
SiteBucket:
  Type: AWS::S3::Bucket
  Properties:
    AccessControl: PublicRead
    BucketName: ${self:custom.siteName}
    WebsiteConfiguration:
      IndexDocument: index.html
      ErrorDocument: error.html
    CorsConfiguration:
      CorsRules:
        - AllowedMethods:
            - GET
            - HEAD
          AllowedOrigins:
            - "*"
          MaxAge: 3000
SiteBucketPolicy:
  Type: "AWS::S3::BucketPolicy"
  DependsOn: "SiteBucket"
  Properties:
    Bucket: ${self:custom.siteName}
    PolicyDocument:
      Statement:
        - Effect: Allow
          Principal: "*"
          Action:
            - "s3:GetObject"
          Resource:
            - "arn:aws:s3:::${self:custom.siteName}/*"
Once the bucket was created via Serverless Framework and the CORS errors appeared, I finagled the CORS policy on S3 manually, without success.
It's also worth noting that the S3 site is set up behind a CloudFront distribution, but I'm not sure if that makes a difference.
This should be an easy fix, but it's proving to be quite tough. Please help.
Isn't this supposed to include a proper URL?
redirectToUrl(303, 's3-bucket.amazon.com/page.html');
I think changing it to the following might fix your issue:
redirectToUrl(303, 'https://s3-bucket.amazon.com/page.html');
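To see why the scheme matters: a Location value without https:// is resolved as a relative path against the requesting page, so the browser never reaches the bucket. A small Python illustration with urllib.parse.urljoin (the hostnames are hypothetical):

```python
from urllib.parse import urljoin

# How a browser resolves a Location header against the requesting page:
print(urljoin("https://api.domain.com/endpoint",
              "s3-bucket.amazon.com/page.html"))
# -> https://api.domain.com/s3-bucket.amazon.com/page.html  (wrong target)

print(urljoin("https://api.domain.com/endpoint",
              "https://s3-bucket.amazon.com/page.html"))
# -> https://s3-bucket.amazon.com/page.html  (intended target)
```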

AWS S3 IAM user can't access bucket

I have an IAM user called server that uses s3cmd to back up to S3.
s3cmd sync /path/to/file-to-send.bak s3://my-bucket-name/
Which gives:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
The same user can send email via SES so I know that the access_key and secret_key are correct.
I have also attached AmazonS3FullAccess policy to the IAM user and clicked on Simulate policy. I added all of the Amazon S3 actions and then clicked Run simulation. All of the actions were allowed so it seems that S3 thinks I should have access. The policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
The only way I can get access is to use the root account's access_key and secret_key. I cannot get any IAM user to be able to log in.
Using s3cmd --debug gives:
DEBUG: Response: {'status': 403, 'headers': {'x-amz-bucket-region': 'eu-west-1', 'x-amz-id-2': 'XXX', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'XXX', 'date': 'Tue, 30 Aug 2016 09:10:52 GMT', 'content-type': 'application/xml'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXX</AWSAccessKeyId><StringToSign>GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/XXX/</StringToSign><SignatureProvided>XXX</SignatureProvided><StringToSignBytes>XXX</StringToSignBytes><RequestId>490BE76ECEABF4B3</RequestId><HostId>XXX</HostId></Error>'}
DEBUG: ConnMan.put(): connection put back to pool (https://XXX.s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
Where I have replaced anything sensitive looking with XXX.
Have I missed something in the permissions setup?
Explicitly use the correct IAM access key and secret key with s3cmd, i.e.:
s3cmd --access_key=75674745756 --secret_key=F6AFHDGFTFJGHGH sync /path/to/file-to-send.bak s3://my-bucket-name/
The error shown is the one you get for an incorrect access key and/or secret key.
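For context on SignatureDoesNotMatch: with the V2 signing scheme visible in the debug output, the signature is just Base64(HMAC-SHA1(secret_key, string_to_sign)); if your client's secret key or string-to-sign differs from what S3 computes, you get exactly this error. A sketch with made-up values:

```python
import base64
import hashlib
import hmac

def sign_v2(secret_key, string_to_sign):
    """AWS Signature Version 2 style: Base64(HMAC-SHA1(secret, string))."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Made-up key and string-to-sign, mirroring the debug output's format.
string_to_sign = "GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/my-bucket-name/"
print(sign_v2("F6AFHDGFTFJGHGH", string_to_sign))
```

A different secret key produces a completely different signature, which is why a single wrong character in the key is enough to trigger the 403.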

uploading files to amazon web server

I am trying to upload files using amazon web services, but I am getting this error as shown below, because of which the files are not being uploaded to the server:
{
  "data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"]</Message><RequestId>7A20103396D365B2</RequestId><HostId>xxxxxxxxxxxxxxxxxxxxx</HostId></Error>",
  "status": 403,
  "config": {
    "method": "POST",
    "transformRequest": [null],
    "transformResponse": [null],
    "url": "https://giblib-verification.s3.amazonaws.com/",
    "data": {
      "key": "#usc.edu/1466552912155.jpg",
      "AWSAccessKeyId": "xxxxxxxxxxxxxxxxx",
      "acl": "private",
      "policy": "xxxxxxxxxxxxxxxxxxxxxxx",
      "signature": "xxxxxxxxxxxxxxxxxxxx",
      "Content-Type": "image/jpeg",
      "file": "file:///storage/emulated/0/Android/data/com.ionicframework.giblibion719511/cache/1466552912155.jpg"
    },
    "_isDigested": true,
    "_chunkSize": null,
    "headers": {
      "Accept": "application/json, text/plain, */*"
    },
    "_deferred": {
      "promise": {}
    }
  },
  "statusText": "Forbidden"
}
Can anyone tell me the reason for the forbidden 403 response? Thanks in advance.
You need to provide more details. Which client are you using? From the looks of it, there is a policy that explicitly denies this upload.
It looks like your user does not have proper permissions for that specific S3 bucket. Use the AWS console or IAM to assign proper permissions, including write.
More importantly, immediately invalidate the key/secret pair, and rename the bucket. Never share actual keys or passwords on public sites. Someone is likely already using your credentials as you read this.
Read the error: Invalid according to Policy: Policy Condition failed: ["starts-with", "$filename", ""].
Your policy document imposes a restriction on the upload that you are not meeting, and S3 is essentially denying the upload because you told it to.
There is no reason to include this condition in your signed policy document. According to the documentation, this means you are expecting a form field called filename that must not be empty. But there's no such form field. Remove this from your policy and the upload should work.
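For reference, the policy document travels Base64-encoded in the policy form field, and (in the V2 scheme these fields suggest) the signature field is an HMAC-SHA1 over that encoding. A sketch of rebuilding the policy without the failing condition; the bucket, key prefix, expiration, and secret are stand-ins:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical policy: the ["starts-with", "$filename", ""] condition removed,
# keeping conditions that match the form fields actually being posted.
policy = {
    "expiration": "2016-06-22T12:00:00.000Z",
    "conditions": [
        {"bucket": "giblib-verification"},
        ["starts-with", "$key", "#usc.edu/"],
        {"acl": "private"},
        ["starts-with", "$Content-Type", "image/"],
    ],
}

# Base64-encode the policy, then sign that encoding with the secret key.
encoded = base64.b64encode(json.dumps(policy).encode()).decode()
signature = base64.b64encode(
    hmac.new(b"MADE-UP-SECRET", encoded.encode(), hashlib.sha1).digest()
).decode()
print(encoded[:16], signature)
```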