I have a bucket s3://my-bucket/ that holds a lot of tenants: s3://my-bucket/app1, s3://my-bucket/app2, s3://my-bucket/app3, etc.
I also have an AWS CloudFront distribution with a custom domain pointing to this bucket as the origin:
app1.mycloudfrontcontenturl.com/app1/images/profilePicture.png
app2.mycloudfrontcontenturl.com/app2/images/customLogo.png
The trick I'm interested in is that files under app1 should not be reachable by changing the host. I mean, in this scenario, hitting someappX.mycloudfrontcontenturl.com/app1/images/profilePicture.png works. I want to prohibit that: if the host header does not match the app in the URL path, the request should get a Forbidden response (or similar).
Any idea that does not use Lambda@Edge?
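For clarity, this is the rule being asked about, sketched in Python purely as an illustration (the hostnames and paths are the ones from the examples above; how to enforce it at the edge without Lambda@Edge is exactly the open question):

def is_request_allowed(host: str, path: str) -> bool:
    # "app1" from "app1.mycloudfrontcontenturl.com"
    tenant_from_host = host.split('.')[0]
    # "app1" from "/app1/images/profilePicture.png"
    tenant_from_path = path.lstrip('/').split('/')[0]
    return tenant_from_host == tenant_from_path

# Desired behaviour per the question:
assert is_request_allowed('app1.mycloudfrontcontenturl.com', '/app1/images/profilePicture.png')
assert not is_request_allowed('someappX.mycloudfrontcontenturl.com', '/app1/images/profilePicture.png')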
How to make an AWS S3 bucket public but restrict it to a specific domain and localhost:3000 (for testing purposes).
Basically, the S3 files will be accessed by a React.js website, and we don't want the S3 files to be accessed outside the www.example.com domain and localhost:3000.
I tried a couple of things, but they don't seem to work.
Bucket policy: not configured, and I'm not sure what to specify.
Let me know what changes are needed to make this work.
It's not possible. At best you could restrict access to a specific HTTP referer, but it's not bulletproof. AWS writes:
Therefore, do not use aws:Referer to prevent unauthorized parties from making direct AWS requests.
You need a proper authorization mechanism and to place your website behind a login screen if you want to control access to it.
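For illustration only, the referer-based restriction mentioned above would look roughly like this when applied with boto3 (the bucket name is a placeholder, the domains are taken from the question, and the same caveat applies: the Referer header is trivially spoofed):

import json
import boto3

referer_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetOnlyFromKnownReferers",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*",
        "Condition": {
            "StringLike": {
                "aws:Referer": [
                    "https://www.example.com/*",
                    "http://localhost:3000/*"
                ]
            }
        }
    }]
}

boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(referer_policy))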
I'm trying to host a static site from S3. I previously had it set up so that Route 53 directed traffic to CloudFront, which caches my public S3 bucket. When I make the bucket private, my whole site goes down. I had set ACLs to allow traffic from CloudFront, but even with that, going to my website gives a 403 Forbidden error.
What am I missing here? Is there a good tutorial to follow for my use case?
Thank you!
I think this would help you: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/ (the "Using a website endpoint as the origin, with access restricted by a Referer header" part). To summarise it: you have CloudFront send a specific header to your S3 origin and change the bucket policy to accept GET requests only when they carry that header.
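A rough sketch of the bucket-policy half of that approach, assuming CloudFront is configured to send a secret Referer value as an origin custom header (the bucket name and secret are placeholders; the difference from the plain referer restriction above is that the value is a shared secret rather than your site's domain):

import json
import boto3

SECRET_REFERER = "replace-with-a-long-random-string"  # also set as a CloudFront origin custom header

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowGetOnlyWithSecretReferer",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-site-bucket/*",
        "Condition": {"StringEquals": {"aws:Referer": SECRET_REFERER}}
    }]
}

boto3.client("s3").put_bucket_policy(Bucket="my-site-bucket", Policy=json.dumps(policy))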
The standard way to solve this is via an Origin Access Identity, which allows CloudFront to access a private S3 bucket.
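A minimal boto3 sketch of that OAI setup, using a placeholder bucket name; attaching the OAI to the distribution's S3 origin still has to be done separately in the distribution config:

import json
import uuid
import boto3

cf = boto3.client("cloudfront")
oai = cf.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": str(uuid.uuid4()),
        "Comment": "OAI for my-site-bucket",
    })
oai_id = oai["CloudFrontOriginAccessIdentity"]["Id"]

# Let only this OAI read the (otherwise private) bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-site-bucket/*",
    }]
}
boto3.client("s3").put_bucket_policy(Bucket="my-site-bucket", Policy=json.dumps(policy))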
I have a bucket my-bucket-name and I want to grant temporary access to some file.pdf in folder-name. By default I get the following link using boto3:
https://my-bucket-name.s3.amazonaws.com/folder-name/file.pdf?AWSAccessKeyId=<key>&Signature=<signature>&x-amz-security-token=<token>&Expires=<time>
But I've also got a DNS alias: my.address.com is mapped to my-bucket-name.s3.amazonaws.com. Of course, if I use it directly I get SignatureDoesNotMatch from Amazon. So I'm using the following code to generate a pre-signed link:
import boto3
from botocore.client import Config

URL_EXPIRATION_TIME = 3600  # seconds

kwargs = {}
kwargs['endpoint_url'] = 'https://my.address.com'
kwargs['config'] = Config(s3={'addressing_style': 'path'})

s3_client = boto3.client('s3', **kwargs)
url = s3_client.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'my-bucket-name',
        'Key': 'folder-name/file.pdf'
    },
    ExpiresIn=URL_EXPIRATION_TIME)
As a result it returns the following link:
https://my.address.com/my-bucket-name/folder-name/file.pdf?AWSAccessKeyId=<key>&Signature=<signature>&x-amz-security-token=<token>&Expires=<time>
There are two problems with this:
I don't want to expose my bucket name, so my-bucket-name/ should be omitted.
This link doesn't work; I'm getting:
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
So these are the questions:
Is it possible to get a working link without exposing the bucket name?
I've already read that custom domains only work for HTTP, not HTTPS access. Is that true? What should I do in this case?
The DNS alias wasn't set up by me, so I'm not sure whether it works or is configured correctly. What should I check or ask for to verify that it will work with S3?
Currently I'm a bit lost in the Amazon docs. Also, I'm new to all this AWS stuff.
It is not possible to hide the bucket name in an Amazon S3 pre-signed URL. This is because the request is being made to the bucket. The signature simply authorizes the request.
One way you could do it is to use Amazon CloudFront, with the bucket as the Origin. You can associate a domain name with the CloudFront distribution, which is unrelated to the Origin where CloudFront obtains its content.
Amazon CloudFront supports pre-signed URLs. You could give CloudFront access to the S3 bucket via an Origin Access Identity (OAI), then configure the distribution to be private. Then, access content via CloudFront pre-signed URLs. Please note that the whole content of the distribution would be private, so you would either need two CloudFront distributions (one public, one private), or only use CloudFront for the private portion (and continue using direct-to-S3 for the public portion).
If the whole website is private, then you could use CloudFront signed cookies instead of having to generate a pre-signed URL for every object.
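For reference, a minimal sketch of generating a CloudFront pre-signed URL with botocore's CloudFrontSigner, assuming my.address.com has been pointed at the CloudFront distribution and a CloudFront key pair / trusted key has been set up (the key pair ID and private-key path are placeholders):

import datetime
from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private key that belongs to the CloudFront key pair / trusted key group
    with open('private_key.pem', 'rb') as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner('KEY_PAIR_ID', rsa_signer)
url = signer.generate_presigned_url(
    'https://my.address.com/folder-name/file.pdf',  # no bucket name in the URL
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1))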
As far as I know, you cannot have a pre-signed URL without exposing the bucket name. And yes, you cannot access a custom domain name mapped to the S3 bucket URL over HTTPS: when you access https://example.com and example.com is a CNAME for my-bucket-name.s3.amazonaws.com, S3 cannot present a valid SSL certificate for your custom domain. See this AWS docs page, Limitation section.
This may be a simple question, but I can't find any tutorials for it.
My website is all stored in S3, but the front end and back end are stored in different buckets.
In my front-end website, JS initiates requests using a relative path like /api/***, so the request URL becomes http://front-end.com/api/***.
How can I make all these requests redirect to my back-end bucket, like this:
http://back-end.com/api/***
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/redirect-website-requests.html
This doc doesn't seem to cover this.
Is there a reason you need to use different domain names to serve your content?
To redirect from http://front-end.com/api/* to http://back-end.com/api/*, there are a couple of ways:
1. Use a Lambda@Edge viewer request function to return a 301/302 redirect to the new URL.
2. Use S3 bucket redirection.
In either of the above cases, you need both front-end.com and back-end.com to point to CloudFront so that you can serve them from CloudFront.
An easy way is to access all of them using front-end.com and create a cache behavior with path pattern "/api/*" that targets the origin bucket you want the request to go to.
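As a sketch, the relevant piece of the distribution configuration would look roughly like this (a Python fragment of the CacheBehaviors structure used by the CloudFront API; the origin IDs are placeholders and the caching settings are omitted):

# Requests for front-end.com/api/* go to the back-end bucket origin;
# everything else falls through to the default behavior (front-end bucket).
cache_behaviors = {
    "Quantity": 1,
    "Items": [{
        "PathPattern": "/api/*",
        "TargetOriginId": "back-end-s3-origin",
        "ViewerProtocolPolicy": "redirect-to-https",
        # cache/forwarding settings omitted for brevity
    }],
}
default_cache_behavior = {
    "TargetOriginId": "front-end-s3-origin",
    "ViewerProtocolPolicy": "redirect-to-https",
}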
I have a bucket called subdomain.domain.com that hosts code that should be used whenever users go to various subdomains.
e.g. going to:
- a.domain.com
- b.domain.com
- c.domain.com
Should go to the same bucket.
I've set the CNAME for all the subdomain URLs to point to the URL of the subdomain.domain.com bucket. The problem is that AWS tries to look for a bucket named a.domain.com instead of just going to the subdomain.domain.com bucket.
I've read some suggestions saying I can create a bucket like a.domain.com and have it redirect back to subdomain.domain.com, but I don't want the URL to change, and I'd like to be able to upload to just one bucket and have all subdomains updated.
Some features that appear to be "missing" in S3 are actually designed into CloudFront, which complements S3. Pointing multiple domain names to a single bucket is one of those features. It isn't possible to do this with only S3 since, as you noticed, S3 matches the hostname with the bucket name.
Create a CloudFront distribution, defining each of the desired domain names as Alternate Domain Names.
For the origin server, type in the web site endpoint hostname of the bucket, found in the S3 console. (Don't select the bucket from the dropdown list).
Point the various hostnames to CloudFront in DNS.
CloudFront will translate the incoming hostnames so that S3 serves all the domains from a single bucket, the one you specified as the origin server.
Note that this configuration also allows you to optionally use SSL with your web hosting buckets, which is another feature that S3 relies on CloudFront to implement.
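A rough sketch of that distribution configuration as the Python structure you would pass to the CloudFront API (the region in the website endpoint, the origin ID, and the omitted cache and certificate settings are all placeholders or assumptions):

distribution_config = {
    "CallerReference": "one-bucket-many-subdomains",
    "Aliases": {"Quantity": 3,
                "Items": ["a.domain.com", "b.domain.com", "c.domain.com"]},
    "Origins": {"Quantity": 1, "Items": [{
        "Id": "s3-website-origin",
        # the bucket's *website endpoint* hostname, typed in by hand (not picked from the dropdown)
        "DomainName": "subdomain.domain.com.s3-website-us-east-1.amazonaws.com",
        "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                               "OriginProtocolPolicy": "http-only"},
    }]},
    "DefaultCacheBehavior": {"TargetOriginId": "s3-website-origin",
                             "ViewerProtocolPolicy": "allow-all",
                             # cache policy / TTL settings omitted for brevity
                             },
    "Comment": "Serve one bucket under several subdomains",
    "Enabled": True,
}
# Pass this to boto3.client("cloudfront").create_distribution(DistributionConfig=distribution_config)
# after filling in the remaining required cache settings.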