AWS S3 file URL generation for user to download the file

How can we get the URL for files in an Amazon S3 bucket? I have tried to get the file with the below format:
http://s3-REGION.amazonaws.com/BUCKET-NAME/KEY
This format is helpful for downloading the file if it has public access and server-side encryption is disabled.
The purpose of the URL generation is to share files with internal teams in my organization. The file might contain exceptions from any of our applications.
I have to restrict the file or the bucket to my organization (not make it public). Whatever bucket I have, server-side encryption is enabled. How can we generate a URL for a file when server-side encryption is enabled?

You can generate a presigned URL for an S3 object: https://docs.aws.amazon.com/cli/latest/reference/s3/presign.html
Presigned URLs can be generated programmatically as well with all AWS SDKs.
For example in Java: https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURLJavaSDK.html
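For illustration, a minimal sketch with the AWS SDK for Java v1; the bucket name and key are placeholders, and credentials are assumed to come from the default provider chain. Because the URL is signed with your credentials, this also works for objects stored with server-side encryption, as long as the signing identity is allowed to decrypt them (e.g. has the necessary KMS permissions for SSE-KMS):
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.net.URL;
import java.util.Date;

public class PresignExample {
    public static void main(String[] args) {
        // Credentials and region come from the default provider chain
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Placeholder bucket and key - replace with your own
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-bucket", "path/to/file.txt")
                        .withMethod(HttpMethod.GET)
                        .withExpiration(new Date(System.currentTimeMillis() + 5 * 60 * 1000)); // 5 minutes

        URL url = s3.generatePresignedUrl(request);
        System.out.println(url); // share this URL with your internal teams
    }
}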

If you are using the .NET AWS SDK to generate a presigned URL:
var s3Client = new AmazonS3Client();
var request1 = new GetPreSignedUrlRequest
{
    BucketName = "bucketName",
    Key = "filename", // the original object key, without any URL encoding
    Expires = DateTime.Now.AddMinutes(5)
};
var urlString = s3Client.GetPreSignedURL(request1);

Related

AWS pre-signed URL returns Signature mismatch on new bucket

I have the following code to generate a pre-signed URL:
params = {'Bucket': bucket_name, 'Key': object_name}
response = s3_client.generate_presigned_url('get_object',
                                            Params=params,
                                            ExpiresIn=expiration)
That works fine on the old bucket I have been using for the last year:
https://old-bucket.s3.amazonaws.com/test_image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIxxxxxxxxxxE%2F20210917%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20210917T210448Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=54e173601fec5f140dd901b0eae1dafbcd8d7ee8b8f311fdc1b120ca447cdd0c
I can paste this URL into a browser and download the file. The file is AWS-KMS encrypted.
But the same AWS-KMS encrypted file uploaded to a newly created bucket returns the following URL:
https://new-bucket.s3.amazonaws.com/test_image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIxxxxxxxxxxE%2F20210917%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20210917T210500Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=2313e0131d4251f9fba522fc8e9880d960f674f3449e141848bd38ca19e1b528
which returns a SignatureDoesNotMatch error:
The request signature we calculated does not match the signature you provided. Check your key and signing method.
There are no changes in the source code - only the bucket name provided to the generate_presigned_url function differs.
The IAM user I am providing to boto3.client has read/write permissions for both buckets.
Comparing the properties and permissions of both buckets, and of the files I am requesting from them, everything looks the same.
GetObject and PutObject work fine for both buckets when dealing with the file directly. The issue occurs only when using a pre-signed URL.
So do any settings/permissions/rules or anything else need to be configured or enabled to make pre-signed URLs work with a particular S3 bucket?

What bucket permissions are required to download zip file directly from URL?

I am following a tutorial, and if I take this S3 URL from it, https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/wiki_gameofthrones_txt.zip, I am able to download the zip file directly to my local machine.
When I substitute my own zip file URL, I get a BadZipFile: File is not a zip file error, and if I open my zip file's URL directly, I get permission denied instead of a download.
I also confirmed the zip files are formatted correctly using the terminal: unzip -t zipfile.zip
What permissions do I need to change in S3, or on the S3 object, to allow the zip file to be downloaded directly from its URL?
I am still very new to IAM and S3 permissions, and the current permissions are the standard ones applied when creating a bucket.
Objects in Amazon S3 are private by default. This means that they cannot be accessed by an anonymous URL (like you have shown).
If you want a specific object to be publicly available (meaning that anyone with the URL can access it), then use the Make Public option in the S3 management console. This can also be configured at the time that the object is uploaded by specifying ACL=public-read.
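For example, a hedged sketch of setting that ACL at upload time with the AWS SDK for Java v1 (the bucket, key, and file names are placeholders):
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CannedAccessControlList;
import com.amazonaws.services.s3.model.PutObjectRequest;

import java.io.File;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
// Upload the object with the public-read canned ACL so anyone with the URL can fetch it
s3.putObject(new PutObjectRequest("my-bucket", "public/file.zip", new File("file.zip"))
        .withCannedAcl(CannedAccessControlList.PublicRead));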
If you want a whole bucket, or particular paths within a bucket, to be public, then you can create a Bucket Policy that grants access to the bucket. This requires S3 Block Public Access to be disabled.
You can also generate an Amazon S3 pre-signed URL, which provides time-limited access to a private object. The pre-signed URL has additional information added that grants permission to access the private object. This is how web applications provide authorized users access to private objects, such as on photo websites.
If an object is accessed via an AWS API call or the AWS Command-Line Interface (CLI), then AWS credentials are used to identify the user. If the user has permission to access the object, then they can download it. This method uses an API call rather than a URL.
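As a hedged sketch of that last method with the AWS SDK for Java v1 (bucket, key, and local path are placeholders), the download succeeds only if the caller's credentials allow s3:GetObject:
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;

import java.io.File;

AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient(); // uses your AWS credentials
// API-based download straight to a local file, no public URL involved
s3.getObject(new GetObjectRequest("my-bucket", "datasets/archive.zip"),
        new File("archive.zip"));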
Two solutions:
Make your bucket/file public with a bucket policy like the one below (not recommended):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:GetObjectVersion"],
            "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"]
        }
    ]
}
Use pre-signed URLs with the SDK:
var params = {
    Bucket: bucketname,
    Key: keyfile,
    Expires: 3600,
    ResponseContentDisposition: `attachment; filename="filename.ext"`
};
var url = s3.getSignedUrl('getObject', params);

S3 access to only specific folder | client builder

I have access only to a specific folder in an S3 bucket.
For the S3 client builder, I was using the following code for uploading to the specified folder of the S3 bucket.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(region)
.build();
(I am running the code from a server which has access to S3, so credentials are not required; I was able to view the buckets and folders from the CLI.)
putObjectRequest = new PutObjectRequest(awsBucketName, fileName, multipartfile.getInputStream(), null)
I even tried giving the bucket name along with the prefix, because I have access only to the specific folder, but I was getting Access Denied (status 403).
So for the S3 client builder, I am trying to use an endpoint configuration rather than just specifying the region. I got the following error.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withEndpointConfiguration(endpointConfiguration)
.withClientConfiguration(new ClientConfiguration().withProtocol(Protocol.HTTP))
.build();
com.amazonaws.SdkClientException: Unable to execute HTTP request: null
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1114)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1064)
What should I do, or how can I verify that it correctly maps to the bucket and folder?
I got it working when I used a presigned URL (refer to the AWS presigned URL documentation, which also has example code for Java) to upload to the one folder you have access to, even when you don't have access to the rest of the bucket.
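As a hedged illustration with the AWS SDK for Java v1, reusing the question's region, awsBucketName, and fileName variables (the "allowed-folder/" prefix is a placeholder for whatever prefix your IAM policy permits):
import com.amazonaws.HttpMethod;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.net.URL;
import java.util.Date;

AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(region).build();
// The key must stay under the folder prefix your IAM policy allows
GeneratePresignedUrlRequest presignRequest =
        new GeneratePresignedUrlRequest(awsBucketName, "allowed-folder/" + fileName)
                .withMethod(HttpMethod.PUT)
                .withExpiration(new Date(System.currentTimeMillis() + 10 * 60 * 1000)); // 10 minutes
URL uploadUrl = s3Client.generatePresignedUrl(presignRequest);
// An HTTP PUT of the file body to uploadUrl performs the upload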

Amazon S3 - Access to Private Bucket

I have multiple images within a private S3 bucket and I would like an instance of Tableau to be able to access those images. Is there a URL or some way to access those images while still keeping the S3 bucket private?
Access Private Bucket through Tableau
You can set up an IAM user with access permissions for S3 and allow Tableau access.
Check the article on Connect to your S3 data with the Amazon Athena connector in Tableau 10.3 for more details.
Note: You need to configure Amazon Athena for querying the S3 content.
Custom Generated S3 Urls to Access Private Bucket
Yes. You can generate a signed URL from your backend using the AWS SDK. This can be done directly with S3 or through AWS CloudFront.
Using S3 signed URLs, e.g. a signed URL for GET Object:
var params = {Bucket: 'bucket', Key: 'key'};
var url = s3.getSignedUrl('getObject', params);
console.log('The URL is', url);
Using CloudFront signed URLs, e.g. a signed URL for GET through CloudFront:
var cfsign = require('aws-cloudfront-sign');
var signingParams = {
    keypairId: process.env.PUBLIC_KEY,
    privateKeyString: process.env.PRIVATE_KEY,
    // Optional - this can be used as an alternative to privateKeyString
    privateKeyPath: '/path/to/private/key',
    expireTime: 1426625464599
};
// Generating a signed URL
var signedUrl = cfsign.getSignedUrl(
    'http://example.cloudfront.net/path/to/s3/object',
    signingParams
);
Note: Generating the URL needs to be done in your backend. You can set up a serverless solution for this by using AWS API Gateway and Lambda to provide an endpoint for authenticated users to access.
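A minimal sketch of such an endpoint as a Java Lambda handler behind API Gateway; the bucket name and the "key" request field are assumptions for this sketch, not part of the original answer:
import com.amazonaws.HttpMethod;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

import java.util.Date;
import java.util.Map;

public class PresignHandler implements RequestHandler<Map<String, String>, String> {
    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // "my-private-bucket" and the "key" field are hypothetical placeholders
        GeneratePresignedUrlRequest request =
                new GeneratePresignedUrlRequest("my-private-bucket", event.get("key"))
                        .withMethod(HttpMethod.GET)
                        .withExpiration(new Date(System.currentTimeMillis() + 60 * 1000)); // 1 minute
        return s3.generatePresignedUrl(request).toString();
    }
}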
In addition, you can use AWS Cognito User Pools with an Identity Pool to get direct access to private S3 content without the above steps. For this you need to use a Cognito User Pool or a federated identity as the identity provider connected to a Cognito Identity Pool.

Mounting S3 Bucket with EC2 Instances using JAVA AWS SDK

Hi Cloud Computing Geeks,
Is there any way of mounting/connecting an S3 bucket to EC2 instances using the Java AWS SDK (not by using CLI commands/ec2-api-tools)? I do have all the Java SDKs required. I successfully created a bucket using the Java AWS SDK, and now I want to connect it to my EC2 instances in the North Virginia region. I didn't find any way to do it, but I hope there must be some way.
Cheers,
Hammy
You don't "mount S3 buckets", they don't work that way for they are not filesystems. Amazon S3 is a service for key-based storage. You put objects (files) with a key and retrieve them back with the same key. The key is merely a name that you assign arbitrarily and you can include "/" markers in the names to simulate a hierarchy of files. For example, a key could be folder1/subfolder/file1.txt. For illustration I show the basic operations with the java sdk.
First of all you must set up your amazon credentials and get an S3 client:
AWSCredentials credentials = new BasicAWSCredentials("your_accessKey", "your_secretKey");
AmazonS3Client s3client = new AmazonS3Client(credentials);
Store a file:
File file = new File("some_local_path/file1.txt");
String fileKey = "folder1/subfolder/file1.txt";
s3client.putObject("bucket_name", fileKey, file);
Retrieve back the file:
S3ObjectInputStream objectInputStream = s3client.getObject("bucket_name", fileKey).getObjectContent();
You can read the InputStream, or you can save it to a file.
To list the objects of a (simulated) folder, see my answer to another question, or the sketch below.
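Since that answer is not reproduced here, a hedged sketch of listing a simulated folder by prefix, reusing the s3client from above (the bucket name and prefix are placeholders):
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3ObjectSummary;

// Keys sharing the prefix behave like the contents of a folder
ListObjectsV2Request listRequest = new ListObjectsV2Request()
        .withBucketName("bucket_name")
        .withPrefix("folder1/subfolder/")
        .withDelimiter("/"); // stop at the next "/" so deeper "subfolders" are not expanded
ListObjectsV2Result listing = s3client.listObjectsV2(listRequest);
for (S3ObjectSummary summary : listing.getObjectSummaries()) {
    System.out.println(summary.getKey()); // e.g. folder1/subfolder/file1.txt
}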