Lambda function custom domain

I have been messing around with AWS Lambda today, trying some things out. I am currently trying to trigger the function from a URL in a browser.
The URL looks similar to this: https://abcdef.execute-api.eu-west-2.amazonaws.com/default/test
As I understand it, I can assign a custom domain to my endpoint, but can I also get rid of the path part of the URL, so that for example:
GET: https://example.com/
GET: https://example.com/somefile.txt
POST: https://example.com/ ['some_post_field' => 'some data']
Will all be passed to my function, or do I need to configure an EC2 instance with NGINX to proxy-pass the requests to Lambda?
Any thoughts would be useful.

There are now a couple different ways you can accomplish this in AWS:
The newest (arguably coolest!) is to use CloudFront to run your code using their Lambda@Edge service. You can completely customize your URL path and have portions used as variables like any other REST endpoint. You attach your Lambda fn to "behaviour" endpoints, which gives you full URL control. It's fairly deep and beyond the scope of your question to explain it all here, but read through the docs at the link provided and you'll likely see lots of stuff you like.
Another older, more expensive, but better-documented method is to use AWS's API Gateway, as you have alluded to in your question's tags. It has a great front-end console and makes it easy to connect API endpoints to your Lambda backend logic by attaching them to REST methods. The console helps you "variable-ize" your URL with form field data. This service gives you the most help with custom domains. Setting up a custom domain is a snap in API Gateway. Be sure to use AWS Certificate Manager for free SSL certs on your custom domain too!
How you specifically set up your endpoints depends on which service you choose. Personally, given your desire to serve up different types of content, I would lean towards CloudFront and define a "behaviour" URL for your dynamic Lambda content. If the URL request does not match one of your defined behaviours, it defaults to the CloudFront cache/origin to serve your static assets (somefile.txt). Only matches go to your attached Lambda fn with form data. Very slick!
A lot of example Lambda@Edge fns are available here.
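To make the "behaviour" idea concrete, here is a minimal Lambda@Edge sketch attached to a CloudFront behaviour. It assumes the Node.js runtime, the @types/aws-lambda typings, and that "Include body" is enabled on the trigger; the some_post_field name is taken from the question and everything else is illustrative, not a definitive implementation:

```typescript
// Minimal Lambda@Edge (origin-request) sketch: answer POSTs to "/" at the edge,
// pass every other request (e.g. GET /somefile.txt) through to the origin.
// Assumes "Include body" is enabled on the CloudFront trigger.
import { CloudFrontRequestHandler } from "aws-lambda";

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;

  if (request.method === "POST" && request.uri === "/") {
    // The body arrives base64-encoded when "Include body" is on.
    const raw = Buffer.from(request.body?.data ?? "", "base64").toString("utf8");
    const fields = new URLSearchParams(raw);

    return {
      status: "200",
      statusDescription: "OK",
      headers: { "content-type": [{ key: "Content-Type", value: "text/plain" }] },
      body: `Received: ${fields.get("some_post_field") ?? ""}`,
    };
  }

  // Anything else falls through to the cache/origin configured on the behaviour.
  return request;
};
```

Requests that don't match the behaviour's path pattern never reach the function at all, which is what keeps static files on the normal CloudFront cache path.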
I have used both and have clients on both now. Lambda@Edge is ridiculously faster and less expensive, BUT it is less documented, has a steeper learning curve, and its console is not nearly as helpful. I would honestly try both to see which fits your situation and experience level best. Both will get the job done. EC2 is most definitely NOT needed (nor desired, perhaps). Hope that helps, and good luck!

Instead of directly exposing the Lambda function via its URL, expose it through AWS API Gateway, where you can define your own paths and map them to a custom domain.
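As a rough illustration (not the poster's code), this is what the Lambda side of an API Gateway proxy integration could look like in TypeScript; event.httpMethod and event.path carry the routing information, so a single function mapped to the root of a custom domain can handle all three example requests from the question:

```typescript
// Sketch of a Lambda behind an API Gateway {proxy+} integration.
import { APIGatewayProxyHandler } from "aws-lambda";

export const handler: APIGatewayProxyHandler = async (event) => {
  if (event.httpMethod === "POST" && event.path === "/") {
    // Form-encoded body, e.g. some_post_field=some+data
    const fields = new URLSearchParams(event.body ?? "");
    return { statusCode: 200, body: `Got: ${fields.get("some_post_field") ?? ""}` };
  }

  // GET / or GET /somefile.txt etc.
  return { statusCode: 200, body: `You requested ${event.path}` };
};
```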

Related

Is there some equivalent of X-Sendfile or X-Accel-Redirect for S3?

I'm building an API, and for some responses it will stream the content of S3 objects back to the requester. I would prefer to serve the content directly rather than send a 302 redirect (e.g. to a CloudFront distro).
The default is that I read the file into the application and then stream it back out.
If I were using Apache or nginx with a local file system, I could ask the reverse proxy to stream the content directly from disk with X-Sendfile or X-Accel-Redirect.
Is there an AWS-native mechanism for doing this, so I can avoid loading the file into the application and serving back out again?
I’m not entirely sure I understand your scenario correctly, but I’m thinking in the following direction:
Generally, CloudFront works like a reverse proxy with a cache attached. (Unlike other vendors' products, where you would "deploy on" the CDN.)
You can attach different types of origins to CloudFront; it has native support for S3 buckets, but basically anything that speaks HTTP can be attached as a custom origin.
So, in the most trivial scenario, you would place your S3 bucket behind CloudFront, add an Origin Access Identity (OAI) and a bucket policy which permits the OAI to access your content.
In order to benefit from caching at the CloudFront edge, you will need to configure it appropriately; otherwise it will just be a proxy. Make sure to set the CloudFront TTLs for your content, and check how the min/max/default TTLs work.
But also don't forget to set headers for your clients to cache (Cache-Control etc.); this may save you a lot of money if the same clients request the same content over and over again.
As we know, caching, and cache invalidation in particular, is tricky. Make sure to understand how CloudFront handles caching so you don't run into problems. For example: cache busting with query parameters does work, but you need to make CloudFront aware that the query string is significant.
Now here comes the exciting part: if you need to react dynamically to the client's request, you have Lambda@Edge and CloudFront Functions at your disposal.
Lambda@Edge is basically what it says: Lambda functions at the edge. They can run in four modes: viewer request, origin request, origin response, and viewer response. Which one you need depends on what you want to modify: incoming vs. outgoing data, and client-to-CloudFront vs. CloudFront-to-origin communication.
CloudFront Functions are pretty limited (ES5 only, no XHR or anything, and they only run on viewer request/response) but are very cheap at the same time. Check the AWS docs to determine what you need.
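For example, a minimal origin-response sketch (assuming the Node.js runtime and the @types/aws-lambda typings) that stamps the Cache-Control header mentioned above onto everything coming back from the S3 origin might look like this:

```typescript
// Lambda@Edge origin-response sketch: add a Cache-Control header so both
// CloudFront and the clients cache the object (here: one day).
import { CloudFrontResponseHandler } from "aws-lambda";

export const handler: CloudFrontResponseHandler = async (event) => {
  const response = event.Records[0].cf.response;

  response.headers["cache-control"] = [
    { key: "Cache-Control", value: "public, max-age=86400" },
  ];

  return response;
};
```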
FWIW, CloudFront also supports signed cookies and signed URLs in case you need to restrict the content to particular viewers.
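If you do need signed URLs, here is a hedged sketch of generating one with the @aws-sdk/cloudfront-signer package; the distribution domain, object path and key pair ID below are placeholders, and the private key is assumed to come from an environment variable:

```typescript
// Generate a short-lived CloudFront signed URL for a private object.
import { getSignedUrl } from "@aws-sdk/cloudfront-signer";

const signedUrl = getSignedUrl({
  url: "https://d111111abcdef8.cloudfront.net/private/report.pdf", // placeholder object URL
  keyPairId: "K2JCJMDEHXQW5F",                                     // placeholder public key ID
  privateKey: process.env.CF_PRIVATE_KEY ?? "",                    // matching PEM private key
  dateLessThan: new Date(Date.now() + 15 * 60 * 1000).toISOString(), // expires in 15 minutes
});

console.log(signedUrl); // return this to the client instead of the raw object URL
```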

How can I only allow a specific origin to access content from Cloudfront/S3 Origins when requested via iFrame?

Here is an image of the general idea I want to accomplish
I have a React application that is hosted as a Zendesk app via an iFrame from subdomain.zendesk.com; the iFrame fetches the content from CloudFront / S3 (using an S3 origin) and displays it within the Zendesk UI.
I'm trying to secure it and want to restrict access to the content to a specific origin (subdomain.zendesk.com, for example), so that if anyone were to view the CloudFront distribution directly (by navigating to xxxx.cloudfront.net) it would reject the request.
How can this be achieved? I have tried using AWS WAF and creating a rule that looks at the request's Origin header and matches it against the subdomain URL (example origin: subdomain.zendesk.com), but that doesn't work, so I think I'm barking up the wrong tree with that.
I have also tried creating a custom origin request policy on the distribution's behaviour, but again that didn't yield any results.
Zendesk does offer signed URL functionality, where the initial request becomes a POST to the server containing a JWT as form data in the request payload. I read that it might be possible to use Lambda@Edge to accomplish this; I tried to implement it but have not had any luck so far.
Any tips, examples or outlines as to what I am misunderstanding about these services would be very much appreciated.
To get better support from the community, share the specific use cases in your question and describe in detail what you tried and what errors you got.
There are various ways to achieve what you show in the picture:
Create multiple CloudFront distributions, one for each domain; they can have either the same or unique origins as needed.
Instead of separate domains, route traffic using "paths" or "routes", e.g. same-domain.com/path1, same-domain.com/path2, etc.
Use Lambda@Edge and allow or redirect traffic based on the requesting domain (a sketch follows below).
Note that you can't do this kind of per-domain redirection with CloudFront's Behaviours functionality alone.
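For the third option, here is a hedged Lambda@Edge viewer-request sketch; the allowed origin is taken from your question, and keep in mind that Referer/Origin headers can be spoofed, so Zendesk's signed-request JWT flow is the stronger check:

```typescript
// Viewer-request sketch: reject requests whose Referer is not the Zendesk subdomain.
import { CloudFrontRequestHandler } from "aws-lambda";

const ALLOWED_PREFIX = "https://subdomain.zendesk.com"; // placeholder from the question

export const handler: CloudFrontRequestHandler = async (event) => {
  const request = event.Records[0].cf.request;
  const referer = request.headers["referer"]?.[0]?.value ?? "";

  if (!referer.startsWith(ALLOWED_PREFIX)) {
    return { status: "403", statusDescription: "Forbidden", body: "Access denied" };
  }

  return request; // matching requests continue on to the S3 origin
};
```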

API Gateway Returns Forbidden when string with "https://" is Posted

I have an API Gateway endpoint setup that uses a Lambda function to store a URL in DynamoDB. When I POST a message with this in the body
"videoURL": "www.youtube.com/watch?v=cgpvCVkrV6M"
the endpoint works fine. It returns 200 and the DynamoDB record is updated. However, when I POST this
"videoURL": "https://www.youtube.com/watch?v=cgpvCVkrV6M"
the endpoint returns a 403 Forbidden response and the DB record is not updated.
When I test inside API Gateway, the "https://" string is accepted.
I also have an API Key, a Usage Plan, a Client Certificate, and CORS Enabled (for local testing). I don't think any of these are the cause of my problem.
Does anyone have a guess as to why the "https://" string is causing a problem?
The problem was in my Web Application Firewall (WAF). When I created my firewall, I added the AWS-AWSManagedRulesCommonRuleSet collection. According to the documentation of this rule set, one of the rules is:
GenericRFI_BODY - Inspects the values of the request body and blocks requests attempting to exploit RFI (Remote File Inclusion) in web applications. Examples include patterns like ://.
Disabling this rule solved my problem. I can now successfully send in and store "https://" in my database.
However, this rule represents a best practice (or at least a good practice) and should not be disabled without considering the risk. By disabling it, I make my endpoint vulnerable to Remote File Inclusion attacks. Since I control both the endpoint and the Lambda function definition, I could instead split my URL input into two fields ("https" and "www.youtube...") and keep the rule enabled. Anyone else encountering this issue will have to weigh the ease vs. risk of each approach.
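A hedged sketch of that second approach, keeping GenericRFI_BODY enabled: the client sends the scheme and the rest of the URL as separate fields so the body never contains "://", and the Lambda recombines them before the DynamoDB write. Field names, table name and key schema are assumptions, not the poster's actual code:

```typescript
// Recombine a split URL inside the Lambda and store it in DynamoDB.
import { APIGatewayProxyHandler } from "aws-lambda";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler: APIGatewayProxyHandler = async (event) => {
  // e.g. { "videoScheme": "https", "videoRest": "www.youtube.com/watch?v=cgpvCVkrV6M" }
  const { videoScheme, videoRest } = JSON.parse(event.body ?? "{}");
  const videoURL = `${videoScheme}://${videoRest}`;

  await ddb.send(
    new PutCommand({
      TableName: process.env.TABLE_NAME ?? "videos", // hypothetical table name
      Item: { videoURL },                            // assumes videoURL is the partition key
    })
  );

  return { statusCode: 200, body: JSON.stringify({ stored: videoURL }) };
};
```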

API Gateway Proxy Without URL Redirection

I’m using AWS API Gateway at https://console.aws.amazon.com/apigateway/home
I did all of the steps to set up a proxy for http://foo.com (example)
I deployed it and the URL is http://bar.com (example)
When I go to http://bar.com/hello, it redirects me to http://foo.com/hello
I want it to stay at http://bar.com/hello, but deliver the contents from http://foo.com/hello like a normal proxy service
Note: My primary intent is to get around CORS issues with a service
It seems to me that whatever service you're proxying is forcing the redirect, as @Steve's comment mentioned. They might be requiring HTTP_REFERER to be a certain domain.
Since I don't know what service you're calling, this is just a guess.

AWS API Gateway + Lambda + custom domain (Route53) Missing Authentication Token issue

I am aware that many similar questions have been posted and answered here but none of them is quite the same with what I am experiencing.
I have a Lambda function that handles incoming requests (GET and POST). I also set up an API Gateway as a public-facing endpoint. Additionally, I set up a custom domain following Set up Custom Domain Name for API Host Name.
The test call works in both the Lambda and API Gateway consoles. Everything also works using the invoke URL, but not with the custom domain I've set up.
Here are some more details:
Invoke URL (works):
https://{api gateway id}.execute-api.us-west-2.amazonaws.com/prod/endpoint
Custom domain endpoint (doesn't work):
https://api.{my domain}.com/endpoint
Base Path Mapping:
/endpoint endpoint:prod
All Method Auth:
Authorization None
API Key Not required
Route53:
An A record as an alias that points api.{my domain}.com to the CloudFront distribution domain name as the alias target.
I'd really appreciate it if anyone knows what's going on here.
I ran into the same issue several years ago and solved it by removing the stage name from the URL.
The API Gateway URL looks like the following:
https://{id}.execute-api.{region}.amazonaws.com/{stage}/todos
If you have routed a custom domain https://api.xxx.com to the API {apiName}:{stage}, the mapping looks like the following:
https://api.xxx.com
path: /
target: {apiName}:{stage}
Finally, the correct way to call it is to remove the stage name:
// **remove stage name!!!!**
// Right
https://api.xxx.com/todos
// Wrong
https://api.xxx.com/{stage}/todos
I found that the issue was my misunderstanding of how base path mapping works.
All my configurations are correct.
My API resource is not under / but under /endpoint
To use the custom domain, instead of visiting https://api.{my domain}.com/endpoint, I need to go to https://api.{my domain}.com/endpoint/endpoint.
Of course this is silly and redundant.
I have two options: I can either set the base path mapping to / instead of /endpoint, or I can just use the API resource / instead of /endpoint.
I went with the latter, because if the base path mapping is set to /, my api.{my domain}.com will only be able to host one API (I can still use resources under the same API, but why waste the extra layer of abstraction?).
This seems dumb, but I am still glad I figured it out.
Another reason for this can be that your user, although an admin, does not have the CloudFrontFullAccess permission! I just spent a couple of hours on this because I relied on Serverless to do it for me, and it had worked perfectly on another project with different credentials. So double-check the article: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-custom-domains.html
Step 1: Map an A record from subdomain.yourdomain.com to the API custom domain / API Gateway domain name (API Gateway -> Custom domain names -> Configuration tab -> Endpoint configuration).
Step 2: From API Gateway / API custom domains, add the API mapping. Leave "path" empty.
Endpoint format:
Original endpoint: https://{api gateway id}.execute-api.us-west-2.amazonaws.com/prod/endpoint
Endpoint with API custom domain: https://api.yourdomain.com/endpoint
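For anyone wiring this up as code rather than in the console, here is a hedged AWS CDK (TypeScript) sketch of the two steps above; the hosted zone ID, certificate ARN and domain names are placeholders, and the REST API is assumed to already exist elsewhere in the app:

```typescript
// CDK sketch: custom domain + empty base path mapping + Route 53 alias record.
import { Stack, StackProps } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as apigateway from "aws-cdk-lib/aws-apigateway";
import * as acm from "aws-cdk-lib/aws-certificatemanager";
import * as route53 from "aws-cdk-lib/aws-route53";
import * as targets from "aws-cdk-lib/aws-route53-targets";

interface ApiDomainProps extends StackProps {
  api: apigateway.RestApi; // the REST API that already serves /prod/endpoint
}

export class ApiDomainStack extends Stack {
  constructor(scope: Construct, id: string, props: ApiDomainProps) {
    super(scope, id, props);

    const domain = new apigateway.DomainName(this, "ApiDomain", {
      domainName: "api.yourdomain.com", // placeholder custom domain
      certificate: acm.Certificate.fromCertificateArn(
        this,
        "Cert",
        "arn:aws:acm:us-east-1:123456789012:certificate/placeholder-id"
      ),
    });

    // Step 2: API mapping with an empty path, so
    // https://api.yourdomain.com/endpoint hits the stage's /endpoint resource.
    domain.addBasePathMapping(props.api, { stage: props.api.deploymentStage });

    // Step 1: alias record pointing the subdomain at the custom domain.
    const zone = route53.HostedZone.fromHostedZoneAttributes(this, "Zone", {
      hostedZoneId: "Z0000000PLACEHOLDER",
      zoneName: "yourdomain.com",
    });
    new route53.ARecord(this, "ApiAliasRecord", {
      zone,
      recordName: "api",
      target: route53.RecordTarget.fromAlias(new targets.ApiGatewayDomain(domain)),
    });
  }
}
```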