AWS S3 - Public file access from EC2 through URL

I am using AWS S3 to store PDF files, docs and other files, and I am running my backend Java application and front-end React application on EC2.
My requirements:
The EC2 backend application should have access to the S3 bucket to put and get objects using the API or the Java client.
The EC2 front-end application should be able to access the objects using the S3 URL.
The EC2 backend application should be able to access the objects using the S3 URL.
I added an IAM policy giving the EC2 instance access to the S3 bucket, and it works for put and get using the API.
For accessing objects using the S3 URL, I enabled public access but limited it to the front-end domain, so that nobody else can access the URL directly and access is only through the front end.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-uat/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com:9000/*"
          ]
        }
      }
    }
  ]
}
This is working as expected: I am now able to access the files using the S3 URL only from within the front-end app, and not outside it.
Now I need the EC2 instances to access S3 objects using the URL as well. I am not sure how to add EC2 instance access to this policy. My EC2 instance count can change from 2 to 5, 6, ... so I want to make it generic, so that all my current and future backend EC2 instances can access the objects through the URL.

Related

Allow EC2 instance to access S3 bucket

I've got an S3 bucket with a few files in it, with public access disabled.
I've also got an EC2 instance which I want to be able to access all files in the bucket.
I created a role with permissions like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::mybucketname"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucketname/*"
      ]
    }
  ]
}
I assigned the role to my EC2 instance, but I still get 403 forbidden if I try and access a file in the bucket from my EC2 instance.
Not sure what I've done wrong.
Thanks
When accessing private objects in an Amazon S3 bucket, it is necessary to provide authentication to prove that you are permitted to access the object.
It would appear that you are attempting to access the file without any authentication information, by simply accessing the URL: mybucket.s3.eu-west-2.amazonaws.com/myfile
If you wish to access an object this way, you can create an Amazon S3 pre-signed URL, which provides time-limited access to a private object in Amazon S3. It appends a 'signature' to the URL to prove that you are authorised to access it.
Alternatively, you could access the object via the AWS Command-Line Interface (CLI), or via the AWS SDK in your preferred programming language. This way, API requests will be authenticated against the S3 service.
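To illustrate what that appended 'signature' is, here is a minimal stdlib-only Python sketch of SigV4 query-string presigning. The bucket, key, region, and credentials are placeholders; in practice you would call the presign helper in the AWS SDK rather than hand-rolling this.

```python
import datetime
import hashlib
import hmac
import urllib.parse


def presign_get(bucket, key, region, access_key, secret_key, expires=3600):
    """Sketch of an S3 SigV4 pre-signed GET URL (use the AWS SDK in real code)."""
    host = f"{bucket}.s3.{region}.amazonaws.com"
    now = datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters that declare who signed the URL and for how long.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )

    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical_request = "\n".join([
        "GET",
        f"/{key}",
        canonical_query,
        f"host:{host}\n",
        "host",
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key from the secret key, date, region, and service.
    def sign(k, msg):
        return hmac.new(k, msg.encode(), hashlib.sha256).digest()

    k = sign(("AWS4" + secret_key).encode(), datestamp)
    k = sign(k, region)
    k = sign(k, "s3")
    k = sign(k, "aws4_request")
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()

    return f"https://{host}/{key}?{canonical_query}&X-Amz-Signature={signature}"


url = presign_get("bucket-uat", "some-file.pdf", "us-east-1",
                  "AKIAEXAMPLE", "example-secret")
print(url)
```

Anyone holding this URL can fetch the object until it expires, which is why the expiry should be kept short; S3 recomputes the signature server-side and rejects the request if any signed component was tampered with.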

S3 bucket policy to access object URL

What S3 bucket policy permission do I need to give an IAM user access to the object URL, which is basically the HTTPS URL for an object that I have uploaded to the S3 bucket?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Sid": "GetObject",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket/*"
    }
  ]
}
With the above policy I can download the object to my local machine, but I can't access it via the object URL, which is the HTTPS link. Only if I keep the S3 bucket fully public do I have HTTPS access to the object URL.
I don't want to provide full public access; how can I provide this access with a bucket policy?
You can get an HTTPS URL by generating S3 pre-signed URLs for the objects. This allows temporary access via the generated URLs.
Other than that, a common choice is to share your S3 objects with the outside world, without making your bucket public, by using CloudFront, as explained in:
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
Objects in Amazon S3 are private by default. They are not accessible via an anonymous URL.
If you want a specific IAM User to be able to access the bucket, then you can add permissions to the IAM User themselves. Then, when accessing the bucket, they will need to identify themselves and prove their identity. This is best done by making API calls to Amazon S3, which include authentication.
If you must access the private object via a URL, then you can create an Amazon S3 pre-signed URL, which is a time-limited URL that provides temporary access to a private object. This proves your ownership and will let S3 serve the content to you. A pre-signed URL can be generated with a couple of lines of code.

Private S3 static website accessed only by API Gateway

I want to set up a static S3 website that is only accessible via API Gateway, so here is what I've done.
S3 side
1. Enabled static website hosting on the S3 bucket.
2. Blocked all public access.
3. Put in a bucket policy that only allows my VPC Endpoint to access it.
{
  "Version": "2012-10-17",
  "Id": "VPCe",
  "Statement": [
    {
      "Sid": "VPCe",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my-bucket.com/*",
      "Condition": {
        "StringNotEquals": {
          "aws:SourceVpce": "vpce-my-vpce"
        }
      }
    }
  ]
}
API Gateway side
1. Mapped that same VPCE to the API
2. Created a proxy resource
3. In the integration request, I made it HTTP and put my S3 website URL as the endpoint URL, with content handling set to passthrough.
4. But when I test this through APIGW, I get access denied.
Is there something I'm missing, or am I wrong to expect this to work?
I get a 403, Access Denied on this.
I want to set up a static S3 website that is only accessible via API Gateway,
You can't do this; it's not possible. S3 static websites are only accessible through their public URL, so you need to access them from the internet.
They are not meant to be accessed from a VPC using private IP addresses or any VPC endpoints.
If you want a private static website, you have to host it yourself on a private EC2 instance or an ECS container.

Is it possible to use AWS Athena through a VPC endpoint?

I would like to know if it is possible to create a VPC endpoint for AWS Athena and restrict it to only allow certain users (that MUST BE in my account) to use the VPC endpoint. I currently use this VPC endpoint policy for an S3 endpoint, and I would need something similar to use with AWS Athena.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user1",
          "arn:aws:iam::<MY_ACCOUNT_ID>:user/user2",
          ...
        ]
      },
      "Action": "*",
      "Resource": "*"
    }
  ]
}
The problem I'm trying to solve is to block developers in my company, who are logged into an RDP session inside my company VPN, from offloading data to a personal AWS account. So I would need a solution that blocks access to the public internet, which I think makes a VPC endpoint the only option, but I'm open to new ideas.
Yes you can; check out this doc:
https://docs.aws.amazon.com/athena/latest/ug/interface-vpc-endpoint.html
Also, keep in mind to adopt encryption at rest and in transit when querying data via Athena: by default the query results are stored unencrypted, even if they are saved to an encrypted S3 bucket.

Amazon S3 Bucket policy, allow only one domain to access files

I have an S3 bucket with a file in it. I only want a certain domain to be able to access the file. I have tried a few policies on the bucket, but none of them are working; this one is from the AWS documentation.
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.phpfiddle.org/*",
            "http://phpfiddle.org/*"
          ]
        }
      }
    }
  ]
}
To test the file, I have hosted the code below on phpfiddle.org. But I am not able to access the file, neither directly from the browser nor through the phpfiddle code.
<?php
$myfile = file_get_contents("https://s3-us-west-2.amazonaws.com/my-bucket-name/some-file.txt");
echo $myfile;
?>
Here are the permissions for the file; the bucket itself also has the same permissions, plus the above policy.
This is just an example link, not an actually working link.
The "Restricting Access to a Specific HTTP Referrer" bucket policy only allows your file to be accessed from a page in your domain (the HTTP referrer is your domain).
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket.
You can't access your file directly from your browser (by typing the file URL into the browser). You need to create a link/image/video tag from a page in your domain.
If you want to file_get_contents() from S3, you need to create a new policy that allows your server IP (see example). Change the IP address to your server IP.
Another solution is to use the AWS SDK for PHP to download the file locally. You can also generate a pre-signed URL to allow your customer to download from S3 directly, for a limited time only.
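A minimal sketch of such an IP-based policy uses the aws:SourceIp condition key. The bucket name and CIDR below are placeholders; replace 203.0.113.10/32 with your server's public IP:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromServerIp",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.10/32"
        }
      }
    }
  ]
}
```

Note that aws:SourceIp matches the public IP the request arrives from, so this works for a server making direct requests but not for traffic routed through a VPC endpoint.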