Amazon server-side encryption with POST request - amazon-web-services

I'm trying to use and enforce Amazon S3 server-side encryption.
I followed their documentation and created the following bucket policy:
{
  "Version": "2012-10-17",
  "Id": "PutObjPolicy",
  "Statement": [
    {
      "Sid": "DenyUnEncryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YourBucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
I'm using the Python boto package, and when I add the x-amz-server-side-encryption header it works like a charm.
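For reference, here is a minimal sketch of the SDK-side upload; it uses boto3 rather than the legacy boto package from the question, and the bucket/key names are placeholders:
import boto3

# Upload a single object with SSE-S3 (AES256) requested explicitly,
# which satisfies the DenyUnEncryptedObjectUploads bucket policy above.
s3 = boto3.client("s3")
s3.put_object(
    Bucket="YourBucket",
    Key="example/object.txt",
    Body=b"hello",
    ServerSideEncryption="AES256",
)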
The problem is that there are several places in the application that use a POST request from an HTML form to upload files to S3.
I've managed to add the x-amz-server-side-encryption header and the files are uploaded. However, when checking in the AWS console I can see that those files are not encrypted.
Does anybody have an idea what I'm doing wrong? I also tried to pass x-amz-server-side-encryption as a form field, but it doesn't help.
The interesting part is that when I remove the x-amz-server-side-encryption header, the requests fail with an "Access Denied" reason.

The solution was to add the x-amz-server-side-encryption to the policy object.
For example:
POLICY = """{'expiration': '2016-01-01T00:00:00Z',
'conditions': [
{'bucket': 'my_bucket'},
['starts-with', '$key', '%s/'],
{'acl': 'public-read'},
['starts-with', '$Content-Type', ''],
['content-length-range', 0, 314572800],
{'x-amz-server-side-encryption': 'AES256'}
]
}"""
Then add an 'x-amz-server-side-encryption' form field with the value "AES256". There is no need to add it as a header in this case.
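As a rough sketch of the same idea with boto3 (an assumption for illustration, not the asker's boto 2 code; bucket and key are placeholders), generate_presigned_post lets you declare the encryption condition and returns the matching form fields:
import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="my_bucket",
    Key="uploads/example.jpg",
    Fields={
        "acl": "public-read",
        "x-amz-server-side-encryption": "AES256",
    },
    Conditions=[
        {"acl": "public-read"},
        {"x-amz-server-side-encryption": "AES256"},
        ["content-length-range", 0, 314572800],
    ],
    ExpiresIn=3600,
)
# post["url"] is the form action; post["fields"] already contains the
# x-amz-server-side-encryption field the HTML form must submit.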

Related

Cannot give aws-lambda access to aws-appsync API

I am working on a project where users can upload files into an S3 bucket, these uploaded files are mapped to a GraphQL key (which was generated by the Amplify CLI), and an aws-lambda function is triggered. All of this is working, but the next step I want is for this aws-lambda function to create a second file with the same ownership attributes and POST the location of the saved second file to the GraphQL API.
I figured this shouldn't be too difficult, but I am having a lot of difficulty and can't understand where the problem lies.
BACKGROUND/DETAILS
I want the owner of the data (the uploader) to be the only user who is able to access the data, with the aws-lambda function operating in an admin role and able to POST/GET to API of any owner.
The GraphQL schema looks like this:
type FileUpload @model
  @auth(rules: [{ allow: owner }]) {
  id: ID!
  foo: String
  bar: String
}
I also found this seemingly promising AWS guide, which I thought would give an IAM role admin access (https://docs.amplify.aws/cli/graphql/authorization-rules/#configure-custom-identity-and-group-claims). I followed it by creating the file amplify/backend/api/<your-api-name>/custom-roles.json and saving it with
{
"adminRoleNames": ["<YOUR_IAM_ROLE_NAME>"]
}
I replaced "<YOUR_IAM_ROLE_NAME>" with an IAM Role which I have given broad access to, including this appsync access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "appsync:*"
      ],
      "Resource": "*"
    }
  ]
}
This is the role given to my aws-lambda function.
When I attempt to run a simple API query in my aws-lambda function with the above settings, I get this error:
response string:
{
  "data": {
    "getFileUpload": null
  },
  "errors": [
    {
      "path": [
        "getFileUpload"
      ],
      "data": null,
      "errorType": "Unauthorized",
      "errorInfo": null,
      "locations": [
        {
          "line": 3,
          "column": 11,
          "sourceName": null
        }
      ],
      "message": "Not Authorized to access getFileUpload on type Query"
    }
  ]
}
My actual Python Lambda script is:
import http.client
import json

API_URL = '<MY_API_URL>'
API_KEY = '<MY_API_KEY>'
# Strip the scheme and path to get the bare host name for the HTTPS connection
HOST = API_URL.replace('https://', '').replace('/graphql', '')

def queryAPI():
    conn = http.client.HTTPSConnection(HOST, 443)
    headers = {
        'Content-type': 'application/graphql',
        'x-api-key': API_KEY,
        'host': HOST
    }
    print('conn: ', conn)
    query = '''
    {
        getFileUpload(id: "<ID_HERE>") {
            description
            createdAt
            baseFilePath
        }
    }
    '''
    graphql_query = {
        'query': query
    }
    query_data = json.dumps(graphql_query)
    print('query data: ', query_data)
    conn.request('POST', '/graphql', query_data, headers)
    response = conn.getresponse()
    response_string = response.read().decode('utf-8')
    print('response string: ', response_string)
I pass in the API key and API URL above in addition to giving aws-lambda the IAM role. I understand that only one is probably needed, but I am trying to get the process to work first and then pare it back.
QUESTION(s)
As far as I understand, I am:
1. providing the appropriate @auth rules to my GraphQL schema based on my goals, and
2. giving my aws-lambda function sufficient IAM authorization (via both an IAM role and an API key) to override any potentially restrictive @auth rules of my GraphQL schema.
But clearly something is not working. Can anyone point me towards a problem that I am overlooking?
I had a similar problem just yesterday.
It's not 1:1 what you're trying to do, but maybe it's still helpful.
I was trying to give Lambda functions permission to access data based on my GraphQL schema. The schema had different @auth directives, which caused the Lambda functions to no longer have access to the data, even though I gave them permissions via the CLI and IAM roles. Although the documentation says this should work, it didn't:
if you grant a Lambda function in your Amplify project access to the GraphQL API via amplify update function, then the Lambda function's IAM execution role is allow-listed to honor the permissions granted on the Query, Mutation, and Subscription types.
Therefore, these functions have special access privileges that are scoped based on their IAM policy instead of any particular #auth rule.
So I ended up adding @auth(rules: [{ allow: custom }]) to all parts of my schema that I want to access via Lambda functions.
When doing this, make sure to add "lambda" as an auth mode to your API via amplify update api.
In the authorization Lambda function, you can then check whether the user invoking the function has access to the requested query/S3 data.
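For illustration, here is a minimal sketch of such an authorization Lambda (the token check is a made-up placeholder; the event and response shapes follow the AppSync Lambda authorization contract):
import json

def handler(event, context):
    # AppSync passes the caller's token in `authorizationToken`.
    token = event.get("authorizationToken", "")
    print("auth event:", json.dumps(event))

    # Placeholder rule: allow requests that present our backend's shared token.
    is_backend_caller = token == "my-backend-token"  # assumption for illustration

    return {
        "isAuthorized": is_backend_caller,
        # Optional values exposed to resolvers via $ctx.identity.resolverContext
        "resolverContext": {"caller": "backend" if is_backend_caller else "unknown"},
        "deniedFields": [],
        "ttlOverride": 0,
    }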

I want to create a presigned URL POST, but it always fails

Thanks for the great packages!
I have a problem when developing with LocalStack, using the S3 service to create a presigned POST URL.
I have run LocalStack with SERVICES=s3 DEBUG=1 S3_SKIP_SIGNATURE_VALIDATION=1 localstack start
I have the settings AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_DEFAULT_REGION=us-east-1 AWS_ENDPOINT_URL=http://localhost:4566 S3_Bucket=my-bucket
I made sure the bucket exists:
> awslocal s3api list-buckets
{
  "Buckets": [
    {
      "Name": "my-bucket",
      "CreationDate": "2021-11-16T08:43:23+00:00"
    }
  ],
  "Owner": {
    "DisplayName": "webfile",
    "ID": "bcaf1ffd86f41161ca5fb16fd081034f"
  }
}
I try to create the presigned URL, running this in the console:
s3_client_sync.create_presigned_post(bucket_name=settings.S3_Bucket, object_name="application/test.png", fields={"Content-Type": "image/png"}, conditions=[["Expires", 3600]])
and it returns this:
{'url': 'http://localhost:4566/kredivo-thailand',
'fields': {'Content-Type': 'image/png',
'key': 'application/test.png',
'AWSAccessKeyId': 'test',
'policy': 'eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19',
'signature': 'LfFelidjG+aaTOMxHL3fRPCw/xM='}}
And I test it using Insomnia,
and I read this log in LocalStack:
2021-11-16T10:54:04:DEBUG:localstack.services.s3.s3_utils: Received presign S3 URL: http://localhost:4566/my-bucket/application/test.png?AWSAccessKeyId=test&Policy=eyJleHBpcmF0aW9uIjogIjIwMjEtMTEtMTZUMTE6Mzk6MjNaIiwgImNvbmRpdGlvbnMiOiBbWyJFeHBpcmVzIiwgMzYwMF0sIHsiYnVja2V0IjogImtyZWRpdm8tdGhhaWxhbmQifSwgeyJrZXkiOiAiYXBwbGljYXRpb24vdGVzdC5wbmcifV19&Signature=LfFelidjG%2BaaTOMxHL3fRPCw%2FxM%3D&Expires=3600
2021-11-16T10:54:04:WARNING:localstack.services.s3.s3_utils: Signatures do not match, but not raising an error, as S3_SKIP_SIGNATURE_VALIDATION=1
2021-11-16T10:54:04:INFO:localstack.services.s3.s3_utils: Presign signature calculation failed: <Response [403]>
What am I missing that prevents me from creating the presigned POST URL?
The problem is with your AWS configuration:
AWS_ACCESS_KEY_ID=test // Should be an Actual access Key for the IAM user
AWS_SECRET_ACCESS_KEY=test // Should be an Actual Secret Key for the IAM user
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://localhost:4566 // Endpoint seems wrong
S3_Bucket=my-bucket // Actual Bucket Name in AWS S3 console
For more information, read here and set up your environment with the correct AWS credentials - Setup AWS Credentials
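For comparison, a minimal sketch that creates the presigned POST with boto3's built-in generate_presigned_post (bucket and key are taken from the question; credentials are assumed to come from your environment or ~/.aws/credentials):
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
response = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="application/test.png",
    Fields={"Content-Type": "image/png"},
    Conditions=[{"Content-Type": "image/png"}],  # every extra field needs a matching condition
    ExpiresIn=3600,
)
print(response["url"])
print(response["fields"])  # include all of these fields, plus the file, in the POST form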

Page Level redirection for S3 hosted website

I have a website (say example.com) that is hosted on AWS S3 (bucket name - "xyz") and is serving traffic via a Cloudfront distribution. The CDN has the Origin mapped to the S3 as per usual practice to deliver the content. The DNS (Route 53) record is mapped to this CDN distribution.
I recently deleted an object from this S3 bucket, say xyz/hello/hello-jon
So when the users are trying to hit example.com/hello/hello-jon, they are getting a 404 error as expected. I'd like to redirect this to a different page that is loading from a different object in the same bucket, say, xyz/world/world-right. So that when the users try to hit the URL example.com/hello/hello-jon they should be redirected to example.com/world/world-right page.
I referred to several Amazon docs and finally settled on this one:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/how-to-page-redirect.html
I tried the second example, Example 2: Redirect requests for a deleted folder to a page. The following JSON-based rule was set up in the Redirection rules of the bucket xyz:
[
  {
    "Condition": {
      "KeyPrefixEquals": "hello/hello-jon/"
    },
    "Redirect": {
      "ReplaceKeyPrefixWith": "world/world-right/"
    }
  }
]
And the redirection did work, but the result was not what I expected. I'm getting the resultant URL as:
http://S3-bucket-name.S3-bucket-region.amazonaws.com/world/world-right/
Instead of https://www.example.com/world/world-right/
Could you please help me in resolving this issue or provide an alternative that could work in this scenario?
Make these changes:
[
  {
    "Condition": {
      "KeyPrefixEquals": "hello/hello-jon/"
    },
    "Redirect": {
      "HostName": "www.example.com",
      "HttpRedirectCode": "301",
      "Protocol": "https",
      "ReplaceKeyPrefixWith": "world/world-right/"
    }
  }
]
The HostName and Protocol fields are described in the document above for redirecting requests to another host.
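If you prefer to apply the rule programmatically, a boto3 sketch along these lines could work (the bucket name and index document are assumptions based on the question):
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="xyz",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "RoutingRules": [
            {
                "Condition": {"KeyPrefixEquals": "hello/hello-jon/"},
                "Redirect": {
                    "HostName": "www.example.com",
                    "HttpRedirectCode": "301",
                    "Protocol": "https",
                    "ReplaceKeyPrefixWith": "world/world-right/",
                },
            }
        ],
    },
)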

AWS CloudFront with multiple S3 origins

I would like to configure an AWS CloudFront CDN to serve HTML static content from two AWS S3 buckets. One bucket should host the objects in the root, the second one should host objects in a specific subpath.
S3 config
The first bucket, myapp.home, should host the home page and all other objects directly under "/".
The second bucket, myapp.subpage, should be used for the same purpose but for a specific set of URLs starting with "/subpage/".
Both buckets have been configured with "static website hosting" option enabled and with a default document "index.html", which has been uploaded to both.
Both buckets have been made public using the following policy (in the case of myapp.subpage the Resource has been adapted accordingly)
{
  "Version": "2012-10-17",
  "Id": "Policy1529690634746",
  "Statement": [
    {
      "Sid": "Stmt1529690623267",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::myapp.home/*"
    }
  ]
}
CloudFront config
The CDN is configured to respond to the name "host.domain.tld".
The CDN is configured with 2 origins:
the bucket myapp.home, having these properties:
Origin Domain Name: myapp.home.s3.amazonaws.com
Origin Path: empty
Origin Type: S3 Origin
the bucket myapp.subpage, having these properties:
Origin Domain Name: myapp.subpage.s3.amazonaws.com
Origin Path: empty
Origin Type: S3 Origin
These origins are linked to 2 Cache Behaviors:
First Behavior
Origin: the bucket myapp.subpage:
Precedence: 0
Path Pattern: subpage/*
Second Behavior
Origin: the bucket myapp.home:
Precedence: 1
Path Pattern: Default (*)
The problem
The myapp.home origin seems to work correctly, but myapp.subpage instead always returns an AccessDenied error for all of the following URIs:
host.domain.tld/subpage
host.domain.tld/subpage/
host.domain.tld/subpage/index.html
Update: I also tried substituting the origins with the S3 website domains, e.g. myapp.subpage.s3-website-eu-west-1.amazonaws.com, instead of the plain bucket domains: the homepage still works anyway, but the subpage this time returns a 404 with Message: "The specified key does not exist" for all of the URIs above.
What am I doing wrong?
Thanks in advance
The "subpage/*" in first behaviors is the directory in myapp.subpage.
Make a directory named subpage in the bucket, then put index.html into this bucket.
Like below:
* myapp.subpage <bucket name>
* subpage <directory>
* index.html
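For example, a short boto3 sketch that puts the index document under the subpage/ prefix so that /subpage/index.html resolves (the local file name and content type are placeholders):
import boto3

s3 = boto3.client("s3")
# The key must include the "subpage/" prefix so CloudFront's "subpage/*"
# behavior maps to an object that actually exists in myapp.subpage.
s3.upload_file(
    "index.html",
    "myapp.subpage",
    "subpage/index.html",
    ExtraArgs={"ContentType": "text/html"},
)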

uploading files to amazon web server

I am trying to upload files using Amazon Web Services, but I am getting the error shown below, because of which the files are not being uploaded to the server:
{
  "data": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>AccessDenied</Code><Message>Invalid according to Policy: Policy Condition failed: [\"starts-with\", \"$filename\", \"\"]</Message><RequestId>7A20103396D365B2</RequestId><HostId>xxxxxxxxxxxxxxxxxxxxx</HostId></Error>",
  "status": 403,
  "config": {
    "method": "POST",
    "transformRequest": [null],
    "transformResponse": [null],
    "url": "https://giblib-verification.s3.amazonaws.com/",
    "data": {
      "key": "#usc.edu/1466552912155.jpg",
      "AWSAccessKeyId": "xxxxxxxxxxxxxxxxx",
      "acl": "private",
      "policy": "xxxxxxxxxxxxxxxxxxxxxxx",
      "signature": "xxxxxxxxxxxxxxxxxxxx",
      "Content-Type": "image/jpeg",
      "file": "file:///storage/emulated/0/Android/data/com.ionicframework.giblibion719511/cache/1466552912155.jpg"
    },
    "_isDigested": true,
    "_chunkSize": null,
    "headers": {
      "Accept": "application/json, text/plain, */*"
    },
    "_deferred": {
      "promise": {}
    }
  },
  "statusText": "Forbidden"
}
Can anyone tell me the reason for the 403 Forbidden response? Thanks in advance
You need to provide more details. Which client are you using? From the looks of it, there is a policy that explicitly denies this upload.
It looks like your user does not have the proper permissions for that specific S3 bucket. Use the AWS console or IAM to assign the proper permissions, including write.
More importantly, immediately invalidate the key/secret pair and rename the bucket. Never share actual keys or passwords on open sites. Someone is likely already using your credentials as you read this.
Read the error: Invalid according to Policy: Policy Condition failed: ["starts-with", "$filename", ""].
Your policy document imposes a restriction on the upload that you are not meeting, and S3 is essentially denying the upload because you told it to.
There is no reason to include this condition in your signed policy document. According to the documentation, it means the POST is expected to include a form field called filename, but there is no such field in your request. Remove this condition from your policy and the upload should work.
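A minimal sketch, assuming boto3 on the signing side, of generating a policy without that condition (the bucket and key are taken from the error above; the other fields are illustrative):
import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="giblib-verification",
    Key="#usc.edu/1466552912155.jpg",
    Fields={"acl": "private", "Content-Type": "image/jpeg"},
    Conditions=[
        {"acl": "private"},
        {"Content-Type": "image/jpeg"},
        # no ["starts-with", "$filename", ""] entry, so the form does not
        # have to include a "filename" field
    ],
    ExpiresIn=3600,
)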