I am trying to upload files using Amazon Web Services, but I am getting the error shown below, and because of it the files are not being uploaded to the server:
{
    "data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"]</Message><RequestId>7A20103396D365B2</RequestId><HostId>xxxxxxxxxxxxxxxxxxxxx</HostId></Error>",
    "status": 403,
    "config": {
        "method": "POST",
        "transformRequest": [null],
        "transformResponse": [null],
        "url": "https://giblib-verification.s3.amazonaws.com/",
        "data": {
            "key": "#usc.edu/1466552912155.jpg",
            "AWSAccessKeyId": "xxxxxxxxxxxxxxxxx",
            "acl": "private",
            "policy": "xxxxxxxxxxxxxxxxxxxxxxx",
            "signature": "xxxxxxxxxxxxxxxxxxxx",
            "Content-Type": "image/jpeg",
            "file": "file:///storage/emulated/0/Android/data/com.ionicframework.giblibion719511/cache/1466552912155.jpg"
        },
        "_isDigested": true,
        "_chunkSize": null,
        "headers": {
            "Accept": "application/json, text/plain, */*"
        },
        "_deferred": {
            "promise": {}
        }
    },
    "statusText": "Forbidden"
}
Can anyone tell me the reason for the 403 Forbidden response? Thanks in advance.
You need to provide more details. Which client are you using? From the looks of it, there is a policy that explicitly denies this upload.
It looks like your user does not have proper permissions for that specific S3 bucket. Use the AWS console or IAM to assign proper permissions, including write.
More importantly, immediately invalidate the key/secret pair and rename the bucket. Never share actual keys or passwords on open sites. Someone is likely already using your credentials as you read this.
Read the error: Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"].
Your policy document imposes a restriction on the upload that you are not meeting, and S3 is essentially denying the upload because you told it to.
There is no reason to include this condition in your signed policy document. According to the documentation, it means you are expecting the form to contain a field called filename, but your POST includes no such field, so S3 rejects the upload. Remove this condition from your policy and the upload should work.
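If you are signing the policy server-side with boto3, a minimal sketch of a presigned POST without that condition might look like the following; the bucket, key, and field values are just taken from the failing request above as placeholders, not a confirmed setup:

import boto3

s3 = boto3.client('s3')

# Bucket, key, and fields copied from the request in the question; substitute your own.
presigned = s3.generate_presigned_post(
    Bucket='giblib-verification',
    Key='#usc.edu/1466552912155.jpg',
    Fields={'acl': 'private', 'Content-Type': 'image/jpeg'},
    Conditions=[
        {'acl': 'private'},
        {'Content-Type': 'image/jpeg'},
        # Note: no ["starts-with", "$filename", ""] condition here.
    ],
    ExpiresIn=3600,
)

# presigned['url'] is the form action; presigned['fields'] contains the exact
# form fields (key, policy, signature, ...) the client must POST with the file.
print(presigned['url'])
print(presigned['fields'])

Alternatively, if the condition must stay for some reason, the form the client posts has to include a filename field so the condition can be satisfied.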
I am working on a project where users can upload files into an S3 bucket; these uploaded files are mapped to a GraphQL key (generated by the Amplify CLI), and an aws-lambda function is triggered. All of this is working, but as the next step I want this aws-lambda function to create a second file with the same ownership attributes and POST the location of the saved second file to the GraphQL API.
I figured this shouldn't be too difficult, but I am having a lot of trouble and can't work out where the problem lies.
BACKGROUND/DETAILS
I want the owner of the data (the uploader) to be the only user who is able to access the data, with the aws-lambda function operating in an admin role, able to POST/GET to the API for any owner.
The GraphQL schema looks like this:
type FileUpload @model
  @auth(rules: [{ allow: owner }]) {
  id: ID!
  foo: String
  bar: String
}
I also found this seemingly promising AWS guide, which I thought would give an IAM role admin access (https://docs.amplify.aws/cli/graphql/authorization-rules/#configure-custom-identity-and-group-claims). I followed it by creating the file amplify/backend/api/<your-api-name>/custom-roles.json and saving it with
{
"adminRoleNames": ["<YOUR_IAM_ROLE_NAME>"]
}
I replaced "<YOUR_IAM_ROLE_NAME>" with an IAM role to which I have given broad access, including this AppSync access:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "appsync:*"
            ],
            "Resource": "*"
        }
    ]
}
This is the role given to my aws-lambda function.
When I attempt to run a simple API query in my aws-lambda function with the above settings, I get this error:
response string:
{
    "data": {
        "getFileUpload": null
    },
    "errors": [
        {
            "path": [
                "getFileUpload"
            ],
            "data": null,
            "errorType": "Unauthorized",
            "errorInfo": null,
            "locations": [
                {
                    "line": 3,
                    "column": 11,
                    "sourceName": null
                }
            ],
            "message": "Not Authorized to access getFileUpload on type Query"
        }
    ]
}
My actual Python Lambda script is:
import http.client
import json

API_URL = '<MY_API_URL>'
API_KEY = '<MY_API_KEY>'
HOST = API_URL.replace('https://', '').replace('/graphql', '')

def queryAPI():
    conn = http.client.HTTPSConnection(HOST, 443)
    headers = {
        'Content-type': 'application/graphql',
        'x-api-key': API_KEY,
        'host': HOST
    }
    print('conn: ', conn)
    query = '''
    {
        getFileUpload(id: "<ID_HERE>") {
            description
            createdAt
            baseFilePath
        }
    }
    '''
    graphql_query = {
        'query': query
    }
    query_data = json.dumps(graphql_query)
    print('query data: ', query_data)
    conn.request('POST', '/graphql', query_data, headers)
    response = conn.getresponse()
    response_string = response.read().decode('utf-8')
    print('response string: ', response_string)
I pass in the API key and API URL above, in addition to giving aws-lambda the IAM role. I understand that only one is probably needed, but I am trying to get the process to work first and then pare it back.
QUESTION(s)
As far as I understand, I am:
(1) providing the appropriate @auth rules to my GraphQL schema based on my goals, and
(2) giving my aws-lambda function sufficient IAM authorization (via both the IAM role and the API key) to override any potentially restrictive @auth rules in my GraphQL schema.
But clearly something is not working. Can anyone point me towards a problem that I am overlooking?
I had a similar problem just yesterday.
It was not 1:1 what you're trying to do, but maybe it's still helpful.
I was trying to give Lambda functions permission to access data based on my GraphQL schema. The schema had different @auth directives, which caused the Lambda functions to lose access to the data, even though I had given them permissions via the CLI and IAM roles. Although the documentation says this should work, it didn't:
if you grant a Lambda function in your Amplify project access to the GraphQL API via amplify update function, then the Lambda function's IAM execution role is allow-listed to honor the permissions granted on the Query, Mutation, and Subscription types.
Therefore, these functions have special access privileges that are scoped based on their IAM policy instead of any particular @auth rule.
So I ended up adding @auth(rules: [{ allow: custom }]) to all parts of my schema that I want to access via Lambda functions.
When doing this, make sure to add "lambda" as an auth mode to your API via amplify update api.
In the authorization Lambda function, you can then check whether the user invoking the function has access to the requested query/S3 data.
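As a rough sketch (not from the original post), such an authorization Lambda receives the caller's token plus the request being made and returns an isAuthorized flag; the field names follow the AppSync Lambda-authorizer event/response shape, and the token check below is a placeholder you would replace with your own ownership logic:

import json

def handler(event, context):
    # AppSync lambda authorization passes the caller's token plus details
    # about the request (query string, variables, etc.).
    token = event.get('authorizationToken')
    request_context = event.get('requestContext', {})
    print('authorization request: ', json.dumps(request_context))

    # Placeholder check -- replace with your own logic, e.g. verify the
    # token and confirm the caller owns the requested record/S3 object.
    is_allowed = token == 'expected-token-value'  # hypothetical token value

    return {
        'isAuthorized': is_allowed,
        # Optional values exposed to resolvers via $ctx.identity.resolverContext
        'resolverContext': {'checkedBy': 'lambda-authorizer'},
        'deniedFields': [],
        'ttlOverride': 0,
    }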
I am trying to refresh a PBI Data Flow using an ADF Web activity, authenticating with the data factory's Managed Identity.
Here is my input to the activity:
{
    "url": "https://api.powerbi.com/v1.0/myorg/groups/1dec5b21-ba60-409b-80cb-de61272ee504/dataflows/0e256da2-8823-498c-b779-3e7a7568137f/refreshes",
    "connectVia": {
        "referenceName": "My-AzureVM-IR",
        "type": "IntegrationRuntimeReference"
    },
    "method": "POST",
    "headers": {
        "Content-Type": "application/json",
        "User-Agent": "AzureDataFactoryV2",
        "Host": "api.powerbi.com",
        "Accept": "*/*",
        "Connection": "keep-alive"
    },
    "body": "{\"notifyOption\":\"MailOnFailure\"}",
    "disableCertValidation": true,
    "authentication": {
        "type": "MSI",
        "resource": "https://analysis.windows.net/powerbi/api"
    }
}
It generates the following error when doing a debug run:
Failure type: User configuration issue
Details: {"error":{"code":"InvalidRequest","message":"Unexpected dataflow error: "}}
I have tried this exact URL in Postman using Bearer Token Authentication and it works. Our AAD Admin group said they added our ADF's Managed Identity to the permission list for the PBI API, so I am not sure what is going on here.
Just an FYI, I was able to get the ADF Managed Identity working with data flow refreshes using the HTTP request in my original post.
The key was that, after having the Tenant Admins add the Managed Identity to a security group with API access, I also had to add the Managed Identity to the PBI Workspace access list as a Member.
Then my API call worked from ADF using the MSI. No Bearer login token needed.
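In case it helps anyone reproduce the same call outside ADF (for example from an Azure Function or VM that has a managed identity), here is a rough Python sketch under the assumption that the azure-identity and requests packages are available; the group and dataflow IDs are the ones from the request above, and the token scope is an assumption based on the resource used there:

import requests
from azure.identity import ManagedIdentityCredential

# IDs taken from the original request; substitute your own.
GROUP_ID = '1dec5b21-ba60-409b-80cb-de61272ee504'
DATAFLOW_ID = '0e256da2-8823-498c-b779-3e7a7568137f'
URL = (f'https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}'
       f'/dataflows/{DATAFLOW_ID}/refreshes')

# Acquire a token for the Power BI resource using the managed identity.
credential = ManagedIdentityCredential()
token = credential.get_token('https://analysis.windows.net/powerbi/api/.default')

response = requests.post(
    URL,
    headers={
        'Authorization': f'Bearer {token.token}',
        'Content-Type': 'application/json',
    },
    json={'notifyOption': 'MailOnFailure'},
)
print(response.status_code, response.text)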
I created a REST API in AWS API Gateway and configured it to pass all requests through to an HTTP endpoint.
After deploying to a stage (dev), it gives me an invoke URL like https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev.
It works fine if I invoke the URL with a sub path appended, like https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev/xxxxx; I can see it forward the request to the downstream HTTP endpoint. However, it doesn't forward any request if I invoke the base URL https://xxxx.execute-api.ap-southeast-2.amazonaws.com/dev. How can I make it work with the base invoke URL, without any sub path?
I tried to add an additional / path resource in API Gateway, but it doesn't allow me to add it.
The application must be able to receive requests at any path, including the root path: /. An API Gateway resource with a path of /{proxy+} captures every path except the root path. Making a request for the root path results in a 403 response from API Gateway with the message Missing Authentication Token.
To fix this omission, add an additional resource to the API with the path set to / and link that new resource to the same HTTP endpoint used in the existing /{proxy+} resource.
The updated OpenAPI document now looks like the following code example:
{
    "openapi": "3.0",
    "info": {
        "title": "All-capturing example",
        "version": "1.0"
    },
    "paths": {
        "/": {
            "x-amazon-apigateway-any-method": {
                "responses": {},
                "x-amazon-apigateway-integration": {
                    "httpMethod": "POST",
                    "type": "aws_proxy",
                    "uri": ""
                }
            }
        },
        "/{proxy+}": {
            "x-amazon-apigateway-any-method": {
                "responses": {},
                "x-amazon-apigateway-integration": {
                    "httpMethod": "POST",
                    "type": "aws_proxy",
                    "uri": ""
                }
            }
        }
    }
}
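If you would rather patch an existing API than re-import the OpenAPI document, a rough boto3 sketch along the same lines might look like this; the API ID and backend URL are placeholders, and HTTP_PROXY is assumed because the question forwards to an HTTP endpoint (use AWS_PROXY instead if your backend is a Lambda proxy, as in the snippet above):

import boto3

apigw = boto3.client('apigateway')

REST_API_ID = 'xxxx'  # placeholder: your REST API id

# Find the root ('/') resource of the API.
resources = apigw.get_resources(restApiId=REST_API_ID)['items']
root_id = next(r['id'] for r in resources if r['path'] == '/')

# Allow ANY method on '/'.
apigw.put_method(
    restApiId=REST_API_ID,
    resourceId=root_id,
    httpMethod='ANY',
    authorizationType='NONE',
)

# Proxy it to the same HTTP backend used by the /{proxy+} resource.
apigw.put_integration(
    restApiId=REST_API_ID,
    resourceId=root_id,
    httpMethod='ANY',
    type='HTTP_PROXY',
    integrationHttpMethod='ANY',
    uri='https://your-backend.example.com/',  # placeholder backend URL
)

# Redeploy so the change takes effect on the stage.
apigw.create_deployment(restApiId=REST_API_ID, stageName='dev')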
I am designing and implementing a backup plan to restore my client API keys. How do I go about this?
To speed up the recovery process, I am trying to create a backup plan for taking backups of the client API keys, probably in S3 or locally. I have been scratching my head for the past 2 days on how to achieve this. Maybe some Python script or something that will take the values from API Gateway and dump them into a new S3 bucket? But I am not sure how to implement this.
You can get the list of all API Gateway API keys using apigateway get-api-keys. Here is the full AWS CLI command:
aws apigateway get-api-keys --include-values
Remember that --include-values is required; otherwise the actual API key values will not be included in the result.
It will display the result in the below format.
{
    "items": [
        {
            "id": "j90yk1111",
            "value": "AAAAAAAABBBBBBBBBBBCCCCCCCCCC",
            "name": "MyKey1",
            "description": "My Key1",
            "enabled": true,
            "createdDate": 1528350587,
            "lastUpdatedDate": 1528352704,
            "stageKeys": []
        },
        {
            "id": "rqi9xxxxx",
            "value": "Kw6Oqo91nv5g5K7rrrrrrrrrrrrrrr",
            "name": "MyKey2",
            "description": "My Key 2",
            "enabled": true,
            "createdDate": 1528406927,
            "lastUpdatedDate": 1528406927,
            "stageKeys": []
        },
        {
            "id": "lse3o7xxxx",
            "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
            "name": "MyKey3",
            "description": "My Key 3",
            "enabled": true,
            "createdDate": 1528406609,
            "lastUpdatedDate": 1528406609,
            "stageKeys": []
        }
    ]
}
To get the details of a single API key, use the AWS CLI command below:
aws apigateway get-api-key --include-value --api-key lse3o7xxxx
It should display the below result.
{
    "id": "lse3o7xxxx",
    "value": "VGUfTNfM7v9uysBDrU1Pxxxxxx",
    "name": "MyKey3",
    "description": "My Key 3",
    "enabled": true,
    "createdDate": 1528406609,
    "lastUpdatedDate": 1528406609,
    "stageKeys": []
}
As with the get-api-keys call, --include-value is required here; otherwise the actual API key value will not be included in the result.
Now you need to convert the output into a format that can be saved to S3 and later imported back into API Gateway.
You can import keys with import-api-keys
aws apigateway import-api-keys --body <value> --format <value>
--body (blob)
The payload of the POST request to import API keys. For the payload format, see API Gateway API Key File Format.
--format (string)
A query parameter to specify the input format of the imported API keys. Currently, only the CSV format is supported: --format csv
The simplest style uses two fields only, e.g. Key,name:
Key,name
apikey1234abcdefghij0123456789,MyFirstApiKey
You can see the full details of the formats in API Gateway API Key File Format.
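Putting it together, a rough Python/boto3 sketch of the backup step might look like the following; the bucket name and object key are placeholders, and the CSV layout follows the simple Key,name format shown above:

import csv
import io
import boto3

apigw = boto3.client('apigateway')
s3 = boto3.client('s3')

BACKUP_BUCKET = 'my-api-key-backups'    # placeholder bucket name
BACKUP_KEY = 'apigateway/api-keys.csv'  # placeholder object key

def backup_api_keys():
    # Collect all API keys, including their values, across result pages.
    keys = []
    kwargs = {'includeValues': True}
    while True:
        page = apigw.get_api_keys(**kwargs)
        keys.extend(page.get('items', []))
        if 'position' not in page:
            break
        kwargs['position'] = page['position']

    # Write them in the CSV format accepted by import-api-keys.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(['Key', 'name'])
    for item in keys:
        writer.writerow([item['value'], item['name']])

    s3.put_object(Bucket=BACKUP_BUCKET, Key=BACKUP_KEY, Body=buf.getvalue())

backup_api_keys()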
I have implemented it in Python using a Lambda for backing up API keys, using boto3 APIs similar to the above answer.
However, I am looking for a way to trigger the lambda with an event of "API key added/removed" :-)
I'm trying to use and enforce amazon s3 server side encryption.
I followed their documentation and I've created the following bucket policy:
{
    "Version": "2012-10-17",
    "Id": "PutObjPolicy",
    "Statement": [
        {
            "Sid": "DenyUnEncryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::YourBucket/*",
            "Condition": {
                "StringNotEquals": {
                    "s3:x-amz-server-side-encryption": "AES256"
                }
            }
        }
    ]
}
I'm using the Python boto package, and when I add the x-amz-server-side-encryption header it works like a charm.
The problem is that there are several places in the application that use a POST request from an HTML form to upload files to S3.
I've managed to add the x-amz-server-side-encryption header and the files are uploaded. However, when checking in the amazon backend console I can see that those files are not encrypted.
Does anybody have an idea what I'm doing wrong? I also tried to pass the x-amz-server-side-encryption as a form field but it doesn't help.
The interesting part is that when I remove the x-amz-server-side-encryption header, the requests fail with an "access denied" reason.
The solution was to add the x-amz-server-side-encryption to the policy object.
For example:
POLICY = """{'expiration': '2016-01-01T00:00:00Z',
'conditions': [
{'bucket': 'my_bucket'},
['starts-with', '$key', '%s/'],
{'acl': 'public-read'},
['starts-with', '$Content-Type', ''],
['content-length-range', 0, 314572800],
{'x-amz-server-side-encryption': 'AES256'}
]
}"""
And add an 'x-amz-server-side-encryption' form field with the value "AES256". There is no need to add it as a header in this case.
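For completeness, if the form is generated from Python anyway, a minimal boto3 sketch that bakes the same field and matching condition into a presigned POST could look like this; the bucket, key, and file name are placeholders, and the requests upload plus head_object check at the end are just one way to verify the result:

import boto3
import requests

s3 = boto3.client('s3')

BUCKET = 'my_bucket'          # placeholder, as in the policy above
KEY = 'uploads/example.jpg'   # placeholder object key

# The SSE value appears both as a form field and as a matching condition,
# mirroring the policy shown above.
post = s3.generate_presigned_post(
    Bucket=BUCKET,
    Key=KEY,
    Fields={'x-amz-server-side-encryption': 'AES256'},
    Conditions=[{'x-amz-server-side-encryption': 'AES256'}],
    ExpiresIn=3600,
)

# Upload a file through the HTML-form-style POST (placeholder file name).
with open('example.jpg', 'rb') as f:
    requests.post(post['url'], data=post['fields'], files={'file': f})

# Confirm the object really was stored encrypted.
head = s3.head_object(Bucket=BUCKET, Key=KEY)
print(head.get('ServerSideEncryption'))  # expect 'AES256'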