I'm trying to have a Go Lambda function write an image to an S3 bucket that will then be accessed via a public URL by the client. When I execute the function locally, with my AWS credentials in my environment, I can access the image at the S3 URL ending in /image.jpg. When the Lambda function runs, though, it adds an Amz-Signature to the URL.
The function's IAM role has the AmazonS3FullAccess policy attached.
My question is how do I either:
Not have the function add this signature, so the client can access the plain URL directly.
Obtain this signature on the client side so it can be appended to the URL there.
In my Go function, I'm uploading to S3 using the s3manager Upload() function, but would it make any difference if I used PutObject() instead?
There are a few different ways to get the files:
You can build the URL and point to the file in S3 directly, but this requires public access and CORS to be allowed on the specific bucket.
Example: https://havecamerawilltravel.com/photographer/how-allow-public-access-amazon-bucket/
If you need your own domain, you can use Amazon CloudFront in front of the S3 bucket URL.
Use getObject() to fetch the file and return it to the client.
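The first option above is just string construction: a public object's URL is fully determined by the bucket name, region, and key, with no SDK call needed. A minimal sketch (bucket and key names are placeholders):

```python
def public_s3_url(bucket: str, key: str, region: str = "us-east-1") -> str:
    """Build the plain (unsigned) virtual-hosted-style S3 object URL."""
    if region == "us-east-1":
        # us-east-1 URLs conventionally omit the region component.
        return f"https://{bucket}.s3.amazonaws.com/{key}"
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

print(public_s3_url("my-bucket", "image.jpg"))
# https://my-bucket.s3.amazonaws.com/image.jpg
```

This URL only works if the bucket policy (or object ACL) allows anonymous GetObject; otherwise S3 returns 403.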
Related
I understand that you can grant read/write to internal AWS account resources like Lambda when you turn off public access. However, what if I need to be able to read an S3 object from an external host, via the S3 URL? Sure, I know I could add a public API endpoint to serve up the S3 asset, but if I use something like <img src=""/> that doesn't help me. If I try to perform a GET on the S3 URL at this point, I get a 403. Does this mean I have to leave 'public access' on?
There are two ways to access objects in private Amazon S3 buckets.
Use S3 API calls
You can make API calls using the AWS CLI or an AWS SDK. These calls require AWS credentials that have GetObject permission on the bucket. They do not require the bucket to be public.
Use a pre-signed URL
Alternatively, you can generate an Amazon S3 pre-signed URL, which provides time-limited access to private objects in Amazon S3. The URL can be used in <img src=...> tags.
The pre-signed URL can be generated in a few lines of code without the need to call AWS: it is essentially a hashed signature, computed from your AWS credentials, that authorises access to the private object. This option appears most suitable for your use-case.
I have an S3 bucket that I created, and I am adding objects and reading them in a client that calls the bucket via the my_bucket.s3.ap-southeast-2.amazonaws.com endpoint. Is there any way to add a header (e.g. Strict-Transport-Security) to the responses from S3 (using the AWS console or CloudFormation)?
I am aware that I can do this by using CloudFront to route the request, but I want to know if I can achieve this directly from S3.
I have uploaded images to an AWS S3 bucket. How can we access that private content securely?
One possible solution is to create a Lambda@Edge function (possible only in the us-east-1 region) that will be triggered by your CloudFront distribution.
Specify a behavior path pattern, for example "/images".
To securely access your images, add an authorizer to your Lambda function; you could use Cognito for that, and authenticate users at the front end (sign-up/sign-in) using the AWS Amplify Auth module.
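A Lambda@Edge function in front of the bucket can also solve the earlier header question: S3 itself cannot inject response headers, but a viewer-response handler can. A minimal sketch (the event shape follows CloudFront's Lambda@Edge record format; the header value is illustrative):

```python
def handler(event, context):
    """Lambda@Edge viewer-response handler: add a Strict-Transport-Security header."""
    response = event["Records"][0]["cf"]["response"]
    # CloudFront expects lowercase header names as keys, each mapping to a
    # list of {"key": ..., "value": ...} dicts.
    response["headers"]["strict-transport-security"] = [
        {"key": "Strict-Transport-Security",
         "value": "max-age=31536000; includeSubDomains"},
    ]
    return response

# Local smoke test with a minimal CloudFront viewer-response event:
event = {"Records": [{"cf": {"response": {"status": "200", "headers": {}}}}]}
out = handler(event, None)
print(out["headers"]["strict-transport-security"][0]["value"])
```

For header injection alone, a lighter-weight CloudFront Function (or a response headers policy on the distribution) would also work; Lambda@Edge is the heavier option that additionally supports the authorization logic described above.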
I have a bucket my-bucket-name and I want to grant temporary access to some file.pdf in folder-name. By default I get the following link using boto3:
https://my-bucket-name.s3.amazonaws.com/folder-name/file.pdf?AWSAccessKeyId=<key>&Signature=<signature>&x-amz-security-token=<token>&Expires=<time>
But I've also got a DNS alias: my.address.com is mapped to my-bucket-name.s3.amazonaws.com. Of course, if I use it directly I get SignatureDoesNotMatch from Amazon, so I'm using the following code to generate a pre-signed link:
import boto3
from botocore.client import Config

kwargs = {}
kwargs['endpoint_url'] = 'https://my.address.com'
kwargs['config'] = Config(s3={'addressing_style': 'path'})
s3_client = boto3.client('s3', **kwargs)
url = s3_client.generate_presigned_url(
    ClientMethod='get_object',
    Params={
        'Bucket': 'my-bucket-name',
        'Key': 'folder-name/file.pdf',
    },
    ExpiresIn=URL_EXPIRATION_TIME,
)
As a result it returns the following link:
https://my.address.com/my-bucket-name/folder-name/file.pdf?AWSAccessKeyId=<key>&Signature=<signature>&x-amz-security-token=<token>&Expires=<time>
There are two problems with this:
I don't want to expose my bucket name, so my-bucket-name/ should be omitted.
This link doesn't work; I'm getting:
<Code>SignatureDoesNotMatch</Code>
<Message>
The request signature we calculated does not match the signature you provided. Check your key and signing method.
</Message>
So these are the questions:
Is it possible to achieve a workable link without exposing bucket name?
I've already read that custom domains are only possible with HTTP, not HTTPS, access. Is that true? What should I do in this case?
The DNS alias wasn't made by me, so I'm not sure if it works or is set up correctly. What should I check or ask about to verify that it will work for S3?
Currently I'm a bit lost in the Amazon docs. Also, I'm new to all this AWS stuff.
It is not possible to hide the bucket name in an Amazon S3 pre-signed URL. This is because the request is being made to the bucket. The signature simply authorizes the request.
One way you could do it is to use Amazon CloudFront, with the bucket as the Origin. You can associate a domain name with the CloudFront distribution, which is unrelated to the Origin where CloudFront obtains its content.
Amazon CloudFront supports pre-signed URLs. You could give CloudFront access to the S3 bucket via an Origin Access Identity (OAI), then configure the distribution to be private. Then, access content via CloudFront pre-signed URLs. Please note that the whole content of the distribution would be private, so you would either need two CloudFront distributions (one public, one private), or only use CloudFront for the private portion (and continue using direct-to-S3 for the public portion).
If the whole website is private, then you could use a cookie with CloudFront instead of having to generate pre-signed URLs for every URL.
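The cookie approach works because CloudFront signed cookies carry a policy document (base64-encoded with a URL-safe character substitution) plus an RSA signature over that policy. As a sketch of just the policy-cookie construction, with a placeholder domain (the accompanying CloudFront-Signature cookie would additionally require RSA-SHA1 signing of the same policy with your CloudFront key pair, omitted here):

```python
import base64
import json
import time

def cloudfront_policy_cookie(resource: str, expires_at: int) -> str:
    """Build the (unsigned) CloudFront-Policy cookie value for a custom policy."""
    policy = json.dumps({
        "Statement": [{
            "Resource": resource,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_at}},
        }]
    }, separators=(",", ":"))  # CloudFront expects the policy without whitespace
    b64 = base64.b64encode(policy.encode("utf-8")).decode("utf-8")
    # CloudFront's URL-safe variant: swap characters that are invalid in URLs/cookies.
    return b64.replace("+", "-").replace("=", "_").replace("/", "~")

cookie = cloudfront_policy_cookie("https://d111111abcdef8.cloudfront.net/*",
                                  int(time.time()) + 3600)
print(cookie[:20])
```

With a wildcard Resource like the one above, a single set of cookies authorizes every object in the distribution, which is what makes it preferable to per-URL signing for a fully private site.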
As far as I know, you cannot have a pre-signed URL without exposing the bucket name. And yes, you cannot access a custom domain name mapped to the S3 bucket URL via HTTPS: when you access https://example.com and example.com is a CNAME for my-bucket-name.s3.amazonaws.com, S3's certificate does not cover your domain, so it cannot terminate the SSL traffic. See the Limitations section of the relevant AWS docs page.
My CloudFront distribution's origin is my S3 bucket. To access an S3 bucket object, we put in a URL such as "cloudfront_domainname/object_name", and it should show the object if the object is public. But in my case the CloudFront URL in the address bar redirects to an S3 URL, so the data is retrieved from S3, not from the CloudFront distribution. Why does this happen?
You can optionally secure the content in your Amazon S3 bucket so users can access it through CloudFront but cannot access it directly by using Amazon S3 URLs. This prevents anyone from bypassing CloudFront and using the Amazon S3 URL to get content that you want to restrict access to. This step isn't required to use signed URLs, but we recommend it.
To require that users access your content through CloudFront URLs, you perform the following tasks:
Create a special CloudFront user called an origin access identity.
Give the origin access identity permission to read the objects in your bucket.
Remove permission for anyone else to use Amazon S3 URLs to read the objects.
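The last two steps above translate into a bucket policy that grants read access to the OAI and to no one else. A sketch that builds such a policy document (the OAI ID and bucket name are placeholders; the principal ARN follows the documented CloudFront OAI format):

```python
import json

def oai_read_policy(bucket: str, oai_id: str) -> str:
    """Bucket policy allowing only a CloudFront origin access identity to read objects."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                # The special IAM ARN CloudFront uses for an origin access identity.
                "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {oai_id}"
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    return json.dumps(policy, indent=2)

print(oai_read_policy("my-bucket", "E1EXAMPLE"))
```

With this policy in place (and S3 Block Public Access left on, so no other grants exist), direct S3 URLs return 403 while requests routed through the distribution succeed.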
Please see documentation here