Adding a response header to S3 bucket calls - amazon-web-services

I have an S3 bucket that I created, and I am adding objects and reading them in a client that calls the bucket via the my_bucket.s3.ap-southeast-2.amazonaws.com endpoint. Is there any way to add a header (e.g. Strict-Transport-Security) to the responses from S3 (using the AWS console or CloudFormation)?
I am aware that I can do this by using CloudFront to route the request, but I want to know if I can achieve this directly from S3.
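For reference, the CloudFront route mentioned above is usually a response headers policy attached to the distribution's cache behavior rather than anything on the bucket itself. A minimal boto3 sketch of that alternative (the policy name is hypothetical, and this is only an illustration, not a way to do it from S3 directly):

import boto3

cloudfront = boto3.client("cloudfront")

# Create a response headers policy that adds Strict-Transport-Security;
# its Id then has to be attached to the distribution's cache behavior.
cloudfront.create_response_headers_policy(
    ResponseHeadersPolicyConfig={
        "Name": "hsts-policy",  # hypothetical policy name
        "SecurityHeadersConfig": {
            "StrictTransportSecurity": {
                "Override": True,
                "AccessControlMaxAgeSec": 31536000,  # one year
            }
        },
    }
)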

Related

How to securely access content in AWS CloudFront using React Native

I have uploaded images to an AWS S3 bucket. How can we access that private content securely?
One possible solution is to create a Lambda@Edge function (which can only be created in the us-east-1 region) that is triggered by your CloudFront distribution.
Specify a behavior path pattern, for example "/images".
To securely access your images, add an authorizer to your Lambda function; you could use Cognito for that and authenticate users on the front end (sign-up/sign-in) using the AWS Amplify Auth module. A minimal sketch of such a function is below.
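A minimal viewer-request sketch in Python, assuming the client sends a Cognito-issued JWT in the Authorization header. The token check here is only a placeholder; real verification against the Cognito user pool's JWKS would be required:

import base64
import json

def _token_looks_valid(token):
    # Placeholder check: decode the JWT payload and require an expiry claim.
    # Replace with real signature verification against your Cognito user pool.
    try:
        payload_b64 = token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return "exp" in payload
    except Exception:
        return False

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    auth = request.get("headers", {}).get("authorization", [{}])[0].get("value", "")
    if auth.startswith("Bearer ") and _token_looks_valid(auth[len("Bearer "):]):
        return request  # authorized: let CloudFront fetch the object from S3
    # Not authorized: return a 401 instead of forwarding the request.
    return {
        "status": "401",
        "statusDescription": "Unauthorized",
        "headers": {"www-authenticate": [{"key": "WWW-Authenticate", "value": "Bearer"}]},
    }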

AWS Lambda function appends X-Amz-Signature to S3 URL

I'm trying to have a Go Lambda function write an image to an S3 bucket that will be accessed by the client via a public URL. When I execute the function locally, with my AWS credentials in my environment, I can access the image at the S3 URL ending in /image.jpg. When the Lambda function runs, though, it adds an X-Amz-Signature to the URL.
The function's IAM role has the AmazonS3FullAccess policy attached.
My question is how do I either:
Not have the function add this signature, so the client can access the plain URL directly.
Obtain this signature on the client side so it can be appended to the URL there.
In my Go function I'm uploading to S3 with the uploader's Upload() function, but would it make any difference if I used PutObject() instead?
There are a few different ways to get the files:
You can build the URL and point to the file in S3, but that requires public access and CORS to be allowed on the specific bucket (see the sketch after this list).
Example: https://havecamerawilltravel.com/photographer/how-allow-public-access-amazon-bucket/
If you need your own domain, you can use AWS CloudFront in front of the S3 bucket URL.
Use GetObject() to get the file and return it in the response to the client.
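For illustration only (the question uses Go, but the idea is the same), a Python/boto3 sketch contrasting the plain public object URL with a pre-signed URL that carries X-Amz-Signature; the bucket and key names are hypothetical:

import boto3

bucket = "my-public-bucket"   # hypothetical bucket
key = "image.jpg"
region = "ap-southeast-2"

# Plain object URL: no signature, works only if the object is publicly readable.
plain_url = f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Pre-signed URL: carries X-Amz-Signature and expires, works for private objects.
s3 = boto3.client("s3", region_name=region)
signed_url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
)

print(plain_url)
print(signed_url)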

CloudFront Origin Group: detecting failover using Lambda@Edge

My setup:
I have a CloudFront Origin Group where Bucket A is the primary bucket and Bucket B is the secondary bucket. A Lambda@Edge function is added on origin-request to do a certain process.
Whenever a request comes to CloudFront, my Lambda@Edge function modifies it to match the folder structure of my bucket and returns the file accordingly.
If Bucket A doesn't have a certain file, it throws an error and CloudFront failover requests the file from Bucket B. Bucket B doesn't have the same structure as Bucket A; it should return the file from the unmodified file path in the bucket.
Example:
My Original Request: /somefile.html
Lambda@Edge modifies this request to get the file from Bucket A at: /en/somefile.html
If Bucket A doesn't have somefile.html, the request goes to Bucket B. It should return the file from the originally requested path, /somefile.html, and not /en/somefile.html.
The above scenario is very simple; my real scenario is much more complex. Basically, Bucket A's file path is the processed path, while Bucket B should return the file from the originally requested path.
What I want:
Using Lambda@Edge, how can I detect whether the request is going to Bucket A or Bucket B?
What I have tried:
I tried adding a certain header to the request headers and checking whether it exists to tell that the request is going to Bucket B, but this doesn't seem to be working.
The hostname of the origin that CloudFront will try to contact after an origin-request trigger returns control can be found in one of two places.
If you are using what CloudFront calls an S3 origin (the REST interface to S3) it will be here:
event.Records[0].cf.request.origin.s3.domainName
If you are using what CloudFront calls a custom origin (which includes S3 website hosting endpoints as well as any other origin server that isn't S3 REST), it's here:
event.Records[0].cf.request.origin.custom.domainName
These can be used to determine which of the two origins in an origin group will receive the request next; a short sketch follows the documentation link below.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-event-structure.html#lambda-event-structure-request
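A minimal origin-request sketch in Python using the event fields above; the bucket domain name is a hypothetical placeholder:

PRIMARY_BUCKET_DOMAIN = "bucket-a.s3.ap-southeast-2.amazonaws.com"  # hypothetical

def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    origin = request.get("origin", {})
    # S3 (REST) origins expose the domain under origin["s3"]; custom origins
    # (including S3 website hosting endpoints) expose it under origin["custom"].
    domain = origin.get("s3", origin.get("custom", {})).get("domainName", "")
    if domain == PRIMARY_BUCKET_DOMAIN:
        # Primary origin (Bucket A): rewrite to the processed folder structure.
        request["uri"] = "/en" + request["uri"]
    # Secondary origin (Bucket B): leave the originally requested path untouched.
    return request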

How to create a session token for my file on AWS S3 if it is connected to another CDN provider?

I have some video files stored on AWS S3 and I want to secure these files by adding a session token with a specific expiry time to the video URLs, to prevent unauthorized access.
I know the S3 SDK can generate temporary credentials with CloudFront to achieve this.
However, if I connect S3 to another CDN provider such as Cloudflare, will these temporary credentials still work?
For example, my video file is stored on S3 -> http://files.video.com.s3.amazonaws.com/video.mp4
The cloudflare cdn url is -> http://files.video.com/video.mp4
If I generate temporary credentials for the file and access the URL -> http://files.video.com/video.mp4?token=4180da90a6973bc8bd801bfe49f04a&expirey=1526231040535
Will it work?
It sounds like you're referring to S3 pre-signed URLs. No; if your S3 bucket is private, Cloudflare will not be able to generate pre-signed URLs to access your files. AWS CloudFront uses an Origin Access Identity to resolve this issue, but with 3rd-party CDNs this is not possible.
There are 2 ways you could achieve better security (source):
Make your bucket public but restrict the allowed IPs for your S3 bucket to Cloudflare IPs only (sketched after this list).
Make your bucket private and use Cloudflare Workers to authorize its GET requests.
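A sketch of the first option in Python/boto3: a bucket policy that allows public reads only from Cloudflare IP ranges. The bucket name is hypothetical and the CIDRs are an example subset only; use the ranges Cloudflare publishes:

import json
import boto3

bucket = "my-video-bucket"  # hypothetical bucket name
cloudflare_cidrs = ["173.245.48.0/20", "103.21.244.0/22"]  # example subset only

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudflareRead",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
        "Condition": {"IpAddress": {"aws:SourceIp": cloudflare_cidrs}},
    }],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))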

How to allow CloudFront to access a certain S3 bucket?

I'm new to AWS. Now I want to write some Java code that uses CloudFront to access an S3 bucket.
I created an S3 bucket, but I don't know how to get a CloudFront object using the S3 credentials.
I read the AWS Java API; it seems that the code should be in this form:
AmazonCloudFrontClient cloudfront = new AmazonCloudFrontClient(credentials);
CreateCloudFrontOriginAccessIdentityRequest originRequest = new CreateCloudFrontOriginAccessIdentityRequest();
originRequest.setRequestCredentials(credentials);
cloudfront.createCloudFrontOriginAccessIdentity(originRequest);
But I don't see an S3 ID or anything to attach the S3 bucket to CloudFront.
If you want to serve your S3 files with CloudFront, it usually means that you want the S3 bucket to be publicly available. You can simply define your objects and bucket as public through the S3 interfaces (web console, API, or 3rd-party tools such as CloudBerry or Bucket Explorer).
You can also set it with the Java SDK:
import com.amazonaws.auth.policy.*;
import com.amazonaws.auth.policy.Statement.Effect;
import com.amazonaws.auth.policy.actions.S3Actions;
import com.amazonaws.auth.policy.resources.S3ObjectResource;
import com.amazonaws.services.s3.*;

// Allow anonymous GetObject on every object in the bucket
Statement allowPublicReadStatement = new Statement(Effect.Allow)
    .withPrincipals(Principal.AllUsers)
    .withActions(S3Actions.GetObject)
    .withResources(new S3ObjectResource(myBucketName, "*"));
Policy policy = new Policy()
    .withStatements(allowPublicReadStatement);
AmazonS3 s3 = new AmazonS3Client(myAwsCredentials);
s3.setBucketPolicy(myBucketName, policy.toJson());
If you want to serve private files, you can check the documentation here: http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html