I am working with a team that is using S3 to host content. They moved from a single bucket for all brands to one bucket per brand, and now we are having trouble linking to the content from within a Salesforce Site.com page. When I copy the link from S3 as HTTPS, I get: "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)."
I have asked them to compare the settings with the bucket that is working, but I don't have access to dig into it myself, and we are pretty new to this as well, so I thought I would see if there are any known paths to walk down. The ID and Key have not changed, and I can access the content via CyberDuck; it just does not load when reached via a link.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] The bucket naming convention they are using is all lowercase and meets the naming guidelines as well, but the way it is structured seems strange to me: they have named the bucket "brandname.s3.companyname", and when copying the link it comes across as "https://brandname.s3.company.s3.amazonaws.com/directory/filename", where the other bucket was being rendered as "https://s3.amazonaws.com/bucketname/......
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 over HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com". For the browser to consider this valid against the hostname you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid... they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation on how SSL certificates work, not a limitation in S3.
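If you're generating these links in code rather than copying them from the console, here's a minimal boto3 sketch that forces path-style addressing; the bucket name, key, and region are just the examples from the question and should be swapped for your own:

    import boto3
    from botocore.client import Config

    # Path-style addressing keeps the dotted bucket name out of the hostname,
    # so the wildcard certificate *.s3[-region].amazonaws.com still matches.
    s3 = boto3.client(
        's3',
        region_name='us-east-1',  # assumption: set this to the bucket's actual region
        config=Config(s3={'addressing_style': 'path'}),
    )

    # A presigned URL is used here only to show the path-style form; a public
    # object is reachable at the same path-style URL without the query string.
    url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'brandname.s3.companyname', 'Key': 'directory/filename'},
        ExpiresIn=3600,
    )
    print(url)  # the bucket ends up in the path, not the hostname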
Okay, it appears it did boil down to some permissions that were missed, and we were able to get the file to display as expected. Other issues are present, but this one is resolved, so I'm marking the question as answered.
TL;DR: We have to trick CloudFront's caching of 307 redirects by creating a new cache behavior for responses coming from our Lambda function.
You will not believe how close we are to achieving this. We are stuck so badly on the last step.
Business case:
Our application stores images in S3 and serves them through CloudFront in order to avoid any geographic slowdowns around the globe.
Now, we want to be really flexible with the design and be able to request new image dimensions directly in the CloudFront URL!
Each new image size will be created on demand and then stored in S3, so the second time it is requested it will be served really quickly, as it will already exist in S3 and will also be cached in CloudFront.
Let's say the user has uploaded the image chucknorris.jpg.
Only the original image will be stored in S3 and will be served on our page like this:
//xxxxx.cloudfront.net/chucknorris.jpg
We have calculated that we now need to display a thumbnail of 200x200 pixels.
Therefore we set the image src in our template to:
//xxxxx.cloudfront.net/chucknorris-200x200.jpg
When this new size is requested, AWS has to generate it on the fly in the same bucket, under the requested key.
This way the image will be directly loaded in the same URL of CloudFront.
I made an ugly drawing with the architecture overview and the workflow on how we are doing this in AWS:
Here is how the Python Lambda function ends:

    return {
        'statusCode': '301',
        'headers': {'location': redirect_url},
        'body': ''
    }
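For context, a rough sketch of what a resize-on-demand handler along these lines might look like; the bucket name, the way the requested key arrives in the event, and the use of Pillow for the resize are all assumptions for illustration, not the actual code:

    import io
    import os
    import re

    import boto3
    from PIL import Image  # assumption: Pillow is used for the actual resize

    s3 = boto3.client('s3')
    BUCKET = os.environ.get('BUCKET', 'my-images-bucket')             # placeholder
    CDN_BASE = os.environ.get('CDN_BASE', 'https://xxxxx.cloudfront.net')

    SIZE_RE = re.compile(r'^(?P<name>.+)-(?P<w>\d+)x(?P<h>\d+)(?P<ext>\.\w+)$')

    def handler(event, context):
        # Assumption: API Gateway passes the requested key, e.g. "chucknorris-200x200.jpg"
        key = event['queryStringParameters']['key']
        match = SIZE_RE.match(key)
        if not match:
            return {'statusCode': '400', 'headers': {}, 'body': ''}

        original_key = match.group('name') + match.group('ext')      # "chucknorris.jpg"
        size = (int(match.group('w')), int(match.group('h')))

        # Fetch the original, resize it, and store the new rendition under the requested key.
        original = s3.get_object(Bucket=BUCKET, Key=original_key)
        image = Image.open(io.BytesIO(original['Body'].read()))
        image.thumbnail(size)
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or 'JPEG')
        buffer.seek(0)
        s3.put_object(Bucket=BUCKET, Key=key, Body=buffer,
                      ContentType=original['ContentType'])

        # Redirect the client back to the CDN, where the object now exists.
        return {
            'statusCode': '301',
            'headers': {'location': '{}/{}'.format(CDN_BASE, key)},
            'body': ''
        }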
The problem:
If we make the Lambda function redirect to S3, it works like a charm.
If we redirect to CloudFront, it goes into a redirect loop because CloudFront caches the 307 (as well as 301, 302 and 303).
As soon as our Lambda function redirects to CloudFront, CloudFront calls the API Gateway URL instead of fetching the image from S3:
I would like to create new cache behavior in CloudFront's Behaviors settings tab.
This behavior should not cache responses from Lambda or S3 (I don't know what exactly is happening internally there), but should still cache any subsequent requests to this very same resized image.
I am trying to set the path pattern -\d+x\d+\..+$, add the ARN of the Lambda function under "Lambda Function Association", and set the Event Type to Origin Response.
Next to that, I am setting the "Default TTL" to 0.
But I cannot save the behavior due to some error:
Are we on the right track, or is the idea behind this "Lambda Function Association" something totally different?
Finally I was able to solve it. Although this is not really a structural solution, it does what we need.
First, thanks to Michael's answer, I used path patterns to match all media types. Second, the Cache Behavior page was a bit misleading to me: the Lambda association there is indeed for Lambda@Edge, although I did not see this anywhere in the cache behavior's tooltips; all you see is just "Lambda". This feature cannot help us, as we do not want to extend our AWS service scope with Lambda@Edge just because of that particular problem.
Here is the solution approach:
I have defined multiple cache behaviors, one per media type that we support:
For each cache behavior I set the Default TTL to be 0.
And the most important part: in the Lambda function, I added a Cache-Control header to the resized images when putting them in S3:

    # Store the resized image with a long-lived Cache-Control header
    s3_resource.Bucket(BUCKET).put_object(Key=new_key,
                                          Body=edited_image_obj,
                                          CacheControl='max-age=12312312',
                                          ContentType=content_type)
To validate that everything works, I can now see that the new image dimension is served with the cache header from CloudFront:
You're on the right track... maybe... but there are at least two problems.
The "Lambda Function Association" that you're configuring here is called Lambda#Edge, and it's not yet available. The only users who can access it is users who have applied to be included in the limited preview. The "maximum allowed is 0" error means you are not a preview participant. I have not seen any announcements related to when this will be live for all accounts.
But even once it is available, it's not going to help you, here, in the way you seem to expect, because I don't believe an Origin Response trigger allows you to do anything to trigger CloudFront to try a different destination and follow the redirect. If you see documentation that contradicts this assertion, please bring it to my attention.
However... Lambda@Edge will be useful for setting Cache-Control: no-cache on the 307 so CloudFront won't cache it, but the redirect itself will still need to go all the way back to the browser.
Note also that Lambda@Edge only supports Node, not Python... so maybe this isn't even part of your plan yet. I can't really tell from the question.
Read about the Lambda@Edge limited preview.
The second problem:
I am trying to set path pattern -\d+x\d+\..+$
You can't do that. Path patterns are string matches supporting * wildcards. They are not regular expressions. You might get away with /*-*x*.jpg, though, since multiple wildcards appear to be supported.
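As a rough sanity check of which request paths such a wildcard pattern would catch, Python's fnmatch is a loose approximation of CloudFront's * matching (it is not CloudFront's actual matcher, only an illustration):

    from fnmatch import fnmatch

    # Loose approximation: CloudFront path patterns support only * and ?,
    # and here we rely only on *.
    pattern = '/*-*x*.jpg'

    for path in ('/chucknorris-200x200.jpg',
                 '/chucknorris.jpg',
                 '/folder/chucknorris-640x480.jpg'):
        print(path, fnmatch(path, pattern))
    # /chucknorris-200x200.jpg True
    # /chucknorris.jpg False
    # /folder/chucknorris-640x480.jpg True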
I have been using IAM server certificates for some of my Elastic Beanstalk applications, but now it's time to renew. What is the correct process for replacing the current certificate with the updated cert?
When I try repeating an upload using the same command as before:
aws iam upload-server-certificate --server-certificate-name foo.bar --certificate-body file://foobar.crt --private-key file://foobar.key --certificate-chain file://chain_bundle.crt
I receive:
A client error (EntityAlreadyExists) occurred when calling the UploadServerCertificate operation: The Server Certificate with name foo.bar already exists.
Is the best practice to simply upload the certificate using a DIFFERENT name and then switch the load balancers to the new certificate? This makes perfect sense, but I wanted to verify that I'm following the correct approach.
EDIT 2015-03-30
I did successfully update my certificate using the technique above. That is - I uploaded the new cert using the same technique as originally, but with a different name, then updated my applications to point to the new certificate.
The question remains however, is this the correct approach?
Yes, that is the correct approach.
Otherwise, you would be forced to roll the new certificate out to every system that uses it at the same time, with no opportunity to test first, if desired.
My local practice, which I don't intend to imply is The One True Way™, yet which serves the purpose nicely, is to append -yyyy-mm for the year and month of the certificate's expiration date to the end of the name, making it easy to differentiate between certificates at a glance... and with this pattern, when the list is sorted lexically, it's coincidentally sorted chronologically as well.
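For what it's worth, a small boto3 sketch of that naming pattern (the file names and the -yyyy-mm suffix below are placeholders):

    import boto3

    iam = boto3.client('iam')

    def read(path):
        with open(path) as f:
            return f.read()

    # Upload the renewed certificate under a new, date-suffixed name,
    # then point the load balancer(s) at the new certificate.
    iam.upload_server_certificate(
        ServerCertificateName='foo.bar-2017-06',   # placeholder expiration year-month
        CertificateBody=read('foobar.crt'),
        PrivateKey=read('foobar.key'),
        CertificateChain=read('chain_bundle.crt'),
    )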
Is it possible to create a policy with multiple statements when using a CloudFront custom policy for signed cookies (not signed URLs)?
I have read the documentation, and although all the examples just have one statement, I cannot see an explicit rule regarding the number of statements allowed.
If it's not possible to have multiple policy statements, it will be difficult to give a particular user signed-cookie access to, say, five random files using only CloudFront security. Any tips on how to do that would be appreciated.
This question is cross-posted here: https://forums.aws.amazon.com/thread.jspa?threadID=223440&tstart=0
FYI
I have faced the same problem and contacted the official AWS support team.
Hello, thanks for offering us a great service.
I am a software engineer from Japan.
Can we have multiple statements in a custom policy, like the syntax below?
    {
        "Statement": [
            { ... },
            { ... },
            { ... }
        ]
    }
I have searched the web and found others trying to do the same thing, and forum/Q&A posts as well. However, we found no answer from the official AWS support team, nor any documentation about it.
The JSON syntax is an array, so it seems like it should work with multiple statements, but it does not.
So, if it does not work, would you add a sentence about that to the official documentation?
And then, I got the answer yesterday:
I just heard back this morning.
You're correct, adding more than one statement
to a custom policy is not supported.
I'm updating the documentation now.
So I think that in a few days the documentation will be updated to say that you cannot set multiple policy statements in a CloudFront custom policy for signed cookies.
It's upsetting there is nothing in the docs that says you can only have one item in the Statement array, but that's AWS docs for ya!
Anyway, a way around this limitation is to set multiple cookies at different path levels. You'll need to generate a signed cookie for each path you want and set each cookie in whatever app you are using. You can imagine an endpoint in your API that generates all of the necessary cookies and sets them in the response headers, and your front end then stores all of those cookies.
More specifically, you'll want to create one CloudFront-Key-Pair-Id cookie with your CloudFront access key ID, and scope that cookie's path to the highest level that your policies will be set to.
Use the AWS CloudFront SDK to sign a cookie for each Resource. Create a pair of CloudFront-Policy and CloudFront-Signature cookie for each path that corresponds to the Resource path.
Say I have the following two Resources and want to give access to both of them:
https://cfsub.cloudfront.net/animals/dogs/*
https://cfsub.cloudfront.net/animals/cats/*
I'd create:
1 CloudFront-Key-Pair-Id cookie with a path of /animals
1 CloudFront-Policy cookie with the base64 policy generated from running the dogs custom policy through the cloudfront signer. This cookie should have a path of /animals/dogs.
1 CloudFront-Policy same thing for cats
1 CloudFront-Signature cookie with the signature generated from running the dogs custom policy through the cloudfront signer. This cookie should have a path of /animals/dogs
1 CloudFront-Signature same thing for cats
All of these cookies should have a domain set to your cloudfront domain cfsub.cloudfront.net
Send all those up to your web app or mobile app.
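If it helps, here's a rough Python sketch of generating those per-path cookie pairs with botocore's CloudFrontSigner; the key-pair ID, private key file, and expiry are placeholders:

    import base64
    from datetime import datetime, timedelta

    from botocore.signers import CloudFrontSigner
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    KEY_PAIR_ID = 'APKAEXAMPLE'               # placeholder CloudFront key pair id
    PRIVATE_KEY_FILE = 'cf_private_key.pem'   # placeholder

    def rsa_signer(message):
        # CloudFront signed cookies use SHA-1 with PKCS#1 v1.5 padding.
        with open(PRIVATE_KEY_FILE, 'rb') as f:
            key = serialization.load_pem_private_key(f.read(), password=None)
        return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

    def cf_b64(data):
        # CloudFront's variant of base64: replace +, =, / with -, _, ~
        return (base64.b64encode(data).decode('utf-8')
                .replace('+', '-').replace('=', '_').replace('/', '~'))

    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
    expires = datetime.utcnow() + timedelta(hours=1)

    cookies = {}
    for animal in ('dogs', 'cats'):
        resource = 'https://cfsub.cloudfront.net/animals/{}/*'.format(animal)
        policy = signer.build_policy(resource, expires).encode('utf-8')
        # One CloudFront-Policy and one CloudFront-Signature cookie per path.
        cookies['/animals/{}'.format(animal)] = {
            'CloudFront-Policy': cf_b64(policy),
            'CloudFront-Signature': cf_b64(rsa_signer(policy)),
        }

    # One CloudFront-Key-Pair-Id cookie scoped to the shared parent path.
    cookies['/animals'] = {'CloudFront-Key-Pair-Id': KEY_PAIR_ID}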
I can't give definitive information on this subject; it's really a question that only someone at Amazon can answer authoritatively.
That said, I believe CloudFront policies may include multiple statements. Their schema is similar to IAM policies but I don't think it'll work exactly how you're expecting.
With IAM policies, you're able to attach multiple statements to one policy but they are OR'd across the statements:
Generally, each statement in a policy includes information about a single permission. If your policy includes multiple statements, a logical OR is applied across the statements at evaluation time. Similarly, if multiple policies are applicable to a request, a logical OR is applied across the policies at evaluation time... IAM Policy Documentation
In the documentation you linked to, the Statement key's value is an array in which you can include multiple statements, but they'll be OR'd together. There is further information there on how the policies are evaluated, which will help in limiting access to the files you're working with.
Giving access to five random files will be a challenge which I do not believe is achievable with CloudFront access policies alone. The conditions available aren't designed with this use case in mind.
As Rodrigo M pointed out, using the AWS API from a script can accomplish what you're attempting to do. Unfortunately, that is the only route I can imagine that will get you there.
If you find a way to accomplish this task using only CloudFront policies (without other AWS services), I'll be quite interested in the solution. It'd be a creative policy and quite useful.
I have a similar requirement and tested AWS CloudFront with a canned policy that includes multiple resources, to restrict access to different URLs.
The policy is a valid JSON object; it looks like this:
    {
        "Statement": [
            {
                "Resource": "https://qssnkgp0nnr9vo.cloudfront.net/foo/*",
                "Condition": {
                    "DateLessThan": {
                        "AWS:EpochTime": 1492666203
                    }
                }
            },
            {
                "Resource": "https://qssnkgp0nnr9vo.cloudfront.net/bar/*",
                "Condition": {
                    "DateLessThan": {
                        "AWS:EpochTime": 1492666203
                    }
                }
            }
        ]
    }
After I signed the policy and sent the request to CloudFront, it turned out that AWS CloudFront does not support it. I got a 403 response saying it was a Malformed Policy:
HTTP/1.1 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?><Error><Code>MalformedPolicy</Code><Message>Malformed Policy</Message></Error>
AWS officially supports only one statement in one signed policy. However, there is a workaround if you need four or fewer statements. For each statement you can create a separate pair of a CloudFront-Policy cookie and a CloudFront-Signature cookie with its own path. The size of this pair of cookies is around 600-900 bytes. Since the Cookie header has a limit of around 4 KB, you definitely can't use more than five pairs, and even five pairs have a high chance of hitting the header limit.
I've been setting up AWS Lambda functions for S3 events. I want to set up a new structure for my bucket, but that's not possible, so I set up a new bucket the way I want; I will migrate the old content and send new content there. I wanted to keep some of the structure the same under a given base folder name: old-bucket/images and new-bucket/images. I have CloudFront serving from old-bucket/images now, but I wanted to add new-bucket/images as well. I thought the Behaviors tab would let me set it up so that it checks new-bucket/images first and then old-bucket/images. Alas, that didn't work: if the object wasn't found in the first one, that was the end of the line.
Am I misunderstanding how behaviors work? Has anyone attempted anything like this?
That is expected behavior. An origin tells Amazon CloudFront where to obtain the data to serve to users, based upon a prefix, suffix, etc.
For example, you could serve old-bucket/* from one Amazon S3 bucket, while serving new-bucket/* from a different bucket.
However, there is no capability to 'fall-back' to a different origin if a file is not found.
You could check for the existence of files before serving the link, and then provide a different link depending upon where the files are stored. Otherwise, you'll need to put all of your files in the location that matches the link you are serving.
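A rough sketch of that existence check with boto3; the bucket names, key prefix, and CloudFront hostname below are placeholders:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client('s3')

    def image_url(key):
        """Link to the new bucket's copy if it exists, otherwise fall back to the old one."""
        try:
            s3.head_object(Bucket='new-bucket', Key='images/' + key)
            return 'https://dxxxx.cloudfront.net/new-bucket/images/' + key
        except ClientError as error:
            if error.response['Error']['Code'] == '404':
                return 'https://dxxxx.cloudfront.net/old-bucket/images/' + key
            raise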
What I want to achieve is pretty simple: if you send a request to some address, the response you get is a single integer number, like 13 for example. I think it is equivalent to hosting an .html page with a single number on it, which I can then parse as a string in my application. (It is a Unity game, using the WWW class to send the request.)
(This is actually a version number. If it is greater than what I have stored in my app, I would update it and then send another request to another place to retrieve something bigger.)
I am looking for the cheapest way to handle this. I planned to use AWS, but I'm confused about which component to use: S3? EC2? Lambda? CloudFront?
If you think doing this on a web host, Heroku, or something else is better, I'd also like to hear about it.
To serve up a simple value, S3 should do the trick.
Create a bucket in the console, using only lowercase letters, digits, and dashes in the name. The name has to be globally unique across all of S3, so make up something unique. We'll call the bucket example-bucket.
Create your file on your computer with the desired contents. If plain text, call it version.txt.
In the AWS console, select the bucket, and upload the file. While clicking through the "next" screens, put a check next to "make everything public" and accept the defaults. Upload the file.
Now, go to https://example-bucket.s3.amazonaws.com/version.txt in your browser (using your actual bucket name) and verify. That's your download link.
Done. As long as you don't expect to handle over about 800 requests per second, this will do exactly what you want.
Review the S3 pricing, of course.
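If you'd rather script the setup than click through the console, here's a minimal boto3 sketch; the bucket name example-bucket and the file version.txt follow the example above:

    import boto3

    s3 = boto3.client('s3')
    BUCKET = 'example-bucket'   # must be globally unique; placeholder from the example above

    # Upload the value and make it publicly readable so the plain HTTPS link works.
    s3.put_object(
        Bucket=BUCKET,
        Key='version.txt',
        Body=b'13',
        ContentType='text/plain',
        ACL='public-read',
    )

    print('https://{}.s3.amazonaws.com/version.txt'.format(BUCKET))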
Although this question is better suited to Server Fault: an EC2 instance running an nginx or Apache web server will be sufficient.
Put a load balancer in front of the EC2 instances.