Redirect to public ELB from AWS ApiGateway - amazon-web-services

I'm trying to redirect a call from API Gateway to a public ELB in AWS. The ELB is open to the world, but I cannot make it work when going through API Gateway.
API Gateway configuration
I get this response from Postman when I call the events operation:
{
"message": "Internal server error"
}
And from the AWS test console, I'm getting this error:
Wed Jan 17 20:29:12 UTC 2018 : Execution failed due to configuration error: Host name 'public-elb.amazonaws.com' does not match the certificate subject provided by the peer (CN=*.confidential.com)
Wed Jan 17 20:29:12 UTC 2018 : Method completed with status: 500
I assume that the ELB is reachable, because when I change to another random URL, the error is "Invalid endpoint address".
Why am I getting this error? I only have one certificate, and it is the same for the URL and the ELB.

Your error is caused by the SSL certificate having the common name "*.confidential.com" while you are redirecting to a different name, "public-elb.amazonaws.com".
The solution is to create an ALIAS (preferred) or CNAME record in DNS that maps your own domain name to the ELB DNS name, then use that name in your redirect.
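As a sketch, the alias record can be created with the AWS CLI; the hosted zone ID, record name, and the ELB's alias hosted zone ID below are placeholders you would substitute with your own values:

```shell
# UPSERT an alias A record mapping api.confidential.com (a name covered by
# the *.confidential.com certificate) to the ELB's DNS name.
# Z1EXAMPLE = your hosted zone's ID (placeholder)
# Z2ELBZONE = the ELB's canonical hosted zone ID (placeholder; varies by region/ELB type)
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.confidential.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2ELBZONE",
          "DNSName": "public-elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```

API Gateway would then point at https://api.confidential.com/... so that the host name matches the certificate's CN.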

Related

Public Route53 CNAME Record to Private API Gateway

Background
I have a Private API Gateway attached to a VPC Endpoint which I am NOT using private DNS for. Therefore the url of the form https://${aws_api_gateway_rest_api.this.id}-${aws_vpc_endpoint.this.id}.execute-api.${var.region}.amazonaws.com/${var.stage}/${var.path} is publicly resolvable per Invoke Private API via VPC Endpoint with Route 53 Alias. This is an endpoint used by other internal clients at my org who are coming from within our private, multi-cloud network.
I can successfully invoke this URL without a problem from my private network with the following:
curl -v -X POST https://${aws_api_gateway_rest_api.this.id}-${aws_vpc_endpoint.this.id}.execute-api.${var.region}.amazonaws.com/${var.stage}/${var.path}
The Necessary Design
This URL will change whenever the api gateway id or vpc endpoint id changes. I need a static url I can provide to internal clients that will never change.
Attempted Solution
I created a Route 53 CNAME record named endpoint.sub.domain.com in a public hosted zone called sub.domain.com and pointed it at ${aws_api_gateway_rest_api.this.id}-${aws_vpc_endpoint.this.id}.execute-api.${var.region}.amazonaws.com to serve as a static "proxy" that always points to the underlying publicly resolvable DNS record for the Private API Gateway.
What happened
I went to curl this just like the previous one:
curl -v -X POST https://endpoint.sub.domain.com/<stage>/<path>
but had TLS issues:
Server certificate:
* subject: CN=*.execute-api.us-east-1.amazonaws.com
* start date: Sep 19 00:00:00 2022 GMT
* expire date: Sep 16 23:59:59 2023 GMT
* subjectAltName does not match endpoint.sub.domain.com
* SSL: no alternative certificate subject name matches target host name 'endpoint.sub.domain.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (60) SSL: no alternative certificate subject name matches target host name 'endpoint.sub.domain.com'
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Tried the same with --insecure just to see:
curl --insecure -v -X POST https://endpoint.sub.domain.com/<stage>/<path>
And received {"message":"Forbidden"}.
Tried the same, this time specifying the api gateway id:
curl --insecure -v -X POST https://endpoint.sub.domain.com/<stage>/<path> -H "x-apigw-api-id:5xxxxxxf"
And received:
{"message":"Missing Authentication Token"}
Tried the same, this time specifying the target of the CNAME record (the original url) as the Host:
curl --insecure -v -X POST https://endpoint.sub.domain.com/<stage>/<path> -H "Host: {api-gateway-id}-{vpc-endpoint-id}.execute-api.us-east-2.amazonaws.com"
And received:
Success!
The Problem
Clients must be able to call this API without passing in the Host header AND with the same secure TLS that the original endpoint provides. How can this be accomplished?
Possible solutions
1. Have a server 302 redirect requests
I don't know if this would solve all the problems
NOT elegant, as I want to accomplish this purely from DNS
2. Configure something involving CloudFront?
3. Some Route 53 / API Gateway DNS configuration?
4. Enable private DNS in my VPC
Would solve the problem but would introduce the problem of not being able to call public api gateways from this network (as private DNS would "swallow" these queries)
Requires more network complexity I wanted to avoid by using a public DNS
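For reference, the certificate actually served for the CNAME can be inspected directly; a quick diagnostic (using the hypothetical endpoint.sub.domain.com name from above) is:

```shell
# Show the subject and SANs of the certificate presented when we connect
# with SNI set to the CNAME. This surfaces the same mismatch curl reported:
# CN=*.execute-api.<region>.amazonaws.com, no SAN for endpoint.sub.domain.com.
openssl s_client -connect endpoint.sub.domain.com:443 \
  -servername endpoint.sub.domain.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName
```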

AWS 403 forbidden from Cloudflare Worker Fetch

I'm getting a 403 Forbidden response when using fetch from a serverless Cloudflare Worker to my own .NET Core API hosted on an AWS EC2 instance, for both GET and POST. Example worker code (also tested with init headers like User-Agent, Accept, etc., but same result):
fetch('http://54.xxx.xxx.xxx/test')
However, that basic fetch to the API's IP URL returns 200 from local JavaScript and a simple hosted web page, as well as from Postman and curl.
Also, the Cloudflare Worker can fetch other APIs without issue:
fetch('http://jsonplaceholder.typicode.com/posts')
In the end I had to use the AWS DNS URL instead:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
This AWS Elastic Beanstalk setup is as basic as possible: a single t3a.nano instance with the default security group. I didn't see any documentation regarding the use of IP vs. DNS URLs, but they should resolve to the same IP. I also don't see any options to deal with DNS issues on the Cloudflare side, nor any similar issues on Stack Overflow.
So after a lot of pain, I'm just documenting the solution here.
Under the amazon instance summary you can find both "Public IPv4 address" and "Public IPv4 DNS".
From the Cloudflare Worker, the fetch with the public DNS name works:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
and the fetch with the public IP returns status 403 with statusText "Forbidden":
fetch('http://54.xxx.xxx.xxx/test')
Cloudflare Workers can make outbound HTTP requests, but only to domain names; it is not possible for a Worker to make a fetch request to an IP address.
I can't confirm the exact behavior you observed (your post was 9 months ago, and a number of things have changed with Cloudflare Workers since then), but in the last month or so I've observed that calling fetch() on an IP address results in the Worker seeing "Error 1003 Access Denied" as the fetch response.
There isn't much info, but here's what's available about Error 1003:
Error 1003 Access Denied: Direct IP Access Not Allowed
Common cause – A client or browser directly accesses a Cloudflare IP address.
Resolution – Browse to the website domain name in your URL instead of the Cloudflare IP address.
As you found, if you use a DNS name instead, fetch works fine.

API Gateway VPC Link Integration to NLB gives 404

I have a proxy+ resource configured like this:
The NLB is internal, so I am using a VPC Link, but when I hit the API Gateway stage URL, I get a 404. Below are the logs:
(some-request-id) Sending request to http://masked.elb.us-east-1.amazonaws.com/microservice/v2/api-docs
Received response. Status: 404, Integration latency: 44 ms
But when I copy and paste the same NLB URL from the log into the browser, I get a JSON response back with HTTP 200.
What is it that I am missing here?
This 404 is being returned by the application behind your load balancer, so it is definitely connecting.
I can see from your request that the hostname you're specifying is an ELB name; is the application listening on this host name? Some web servers, such as Apache or Nginx, fall back to the first vhost when the Host header does not match any configured vhost, which may not be your application.
The domain name you specify in API Gateway should be the one it will present to the host; the VPC Link stores which load balancer the link connects to. So if your API has a vhost for https://api.example.com, you would specify https://api.example.com/{proxy}.
On your host, you should be able to see in the access logs (and error logs) which host/path it is trying to load.
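To make that concrete, here is a hedged AWS CLI sketch of a VPC Link integration; all IDs (abc123, xyz789, vpclink1) and the api.example.com host are placeholders, not values from the question:

```shell
# The --uri host is what API Gateway sends in the Host header; the traffic
# itself goes to the NLB that the VPC Link (vpclink1) is attached to.
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id xyz789 \
  --http-method ANY \
  --type HTTP_PROXY \
  --integration-http-method ANY \
  --connection-type VPC_LINK \
  --connection-id vpclink1 \
  --uri 'http://api.example.com/{proxy}'
```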
It turns out that I was pointing to the wrong VPC Link. Once I pointed to the correct VPC Link, it started working.
The key here is that even though the API Gateway logs tell me it is hitting http://masked.elb.us-east-1.amazonaws.com/microservice/v2/api-docs, it doesn't actually hit this URL. Instead it hits the NLB that the VPC Link is attached to.
I confirmed this by changing the domain name in the Endpoint URL to:
http://domainwhichdoesnotexist.com/microservice/v2/api-docs
And in the logs I see this:
Thu Jul 30 09:28:09 UTC 2020 : Sending request to http://domainwhichdoesnotexist.com/microservice/api/api-docs
Thu Jul 30 09:28:09 UTC 2020 : Received response. Status: 200, Integration latency: 72 ms

Unable to issue Let's Encrypt certificate for AWS Route 53 domain

I have a DigitalOcean droplet with Dokku running on it. I also have an AWS Route 53 hosted zone (the domain was registered elsewhere, I changed the name servers to Route 53). In that hosted zone I have created an A record pointing to my droplet.
The A record seems to work fine (I can access my Dokku container from Postman by domain).
I am now trying to issue a Let's Encrypt certificate for my domain. I'm using dokku-letsencrypt for this. However, I'm receiving the following error:
CA marked some of the authorizations as invalid, which likely means it could not access http://example.com/.well-known/acme-challenge/X. Did you set correct path in -d example.com:path or --default_root? Are all your domains accessible from the internet? Please check your domains' DNS entries, your host's network/firewall setup and your webserver config. If a domain's DNS entry has both A and AAAA fields set up, some CAs such as Let's Encrypt will perform the challenge validation over IPv6. If your DNS provider does not answer correctly to CAA records request, Let's Encrypt won't issue a certificate for your domain (see https://letsencrypt.org/docs/caa/). Failing authorizations: https://acme-v02.api.letsencrypt.org/acme/authz-v3/5705758732
Challenge validation has failed, see error log.
The link provided by the error contains this:
DNS problem: SERVFAIL looking up A for gmail-bot.bloberenober.dev - the domain's nameservers may be malfunctioning
I performed a query on unboundtest.com and the response is kinda cryptic to me, but these are the last lines:
Jul 06 18:40:21 unbound[5640:0] info: Missing DNSKEY RRset in response to DNSKEY query.
Jul 06 18:40:21 unbound[5640:0] info: Could not establish a chain of trust to keys for bloberenober.dev. DNSKEY IN
Jul 06 18:40:21 unbound[5640:0] info: 127.0.0.1 gmail-bot.bloberenober.dev. A IN SERVFAIL 6.743746 0 44
full report
I did some research and found out that DNSKEY records are part of DNSSEC, and apparently it is not supported by Route 53 as a DNS service:
Amazon Route 53 supports DNSSEC for domain registration. However, Route 53 does not support DNSSEC for DNS service, regardless of whether the domain is registered with Route 53. If you want to configure DNSSEC for a domain that is registered with Route 53, you must either use another DNS service provider or set up your own DNS server.
source
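As a diagnostic (not from the original post): this failure pattern typically appears when the parent zone still publishes a DS record from a previous DNSSEC-enabled setup while the current nameservers serve no matching DNSKEY, which is consistent with the "Missing DNSKEY RRset" line in the unbound log above. Two dig queries can make the mismatch visible:

```shell
# A DS record at the parent combined with no DNSKEY served by the zone
# itself breaks the chain of trust, so validating resolvers answer SERVFAIL.
dig +short DS bloberenober.dev @8.8.8.8      # is a DS published at the parent?
dig +short DNSKEY bloberenober.dev @8.8.8.8  # does the zone serve a DNSKEY?
```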
I have also tried running certbot manually and added a TXT record to my hosted zone, but received a similar error:
Failed authorization procedure. gmail-bot.bloberenober.dev (dns-01): urn:ietf:params:acme:error:dns :: DNS problem: SERVFAIL looking up TXT for _acme-challenge.gmail-bot.bloberenober.dev - the domain's nameservers may be malfunctioning
The domain in question is gmail-bot.bloberenober.dev
What am I doing wrong? Can I even issue a Let's Encrypt certificate for this case?
I solved this issue by changing my DNS service provider to Cloudflare instead of Route 53. They provide DNSSEC support and a generic SSL certificate out of the box, which was enough for my needs, so in the end I didn't need to issue a Let's Encrypt certificate at all.

Getting 502 error when trying to set up SSL on EC2 instance via Cloudfront

I'm trying to set up an SSL certificate on an EC2 instance I've installed WordPress on, using CloudFront and Route 53, but I'm getting a 502 error in the browser when I go to the URL. I'm not using an ELB, as I'm not expecting the traffic to be very high (at least for a while). Does anyone know what the issue is?
Here's the error I'm getting:
Request URL:https://react.edbiden.com/
Request Method:GET
Status Code:502
Remote Address:54.230.11.194:443
Referrer Policy:no-referrer-when-downgrade
Response Headers
content-length:587
content-type:text/html
date:Sun, 13 Aug 2017 10:45:32 GMT
server:CloudFront
status:502
via:1.1 d10e0115903b50001036753d910516ef.cloudfront.net (CloudFront)
x-amz-cf-id:YWp5HN-1zbO56PxkmH_TIBYFtQ4sO1LnvmYk4wjnrTfuXKP0RHLxnQ==
x-cache:Error from cloudfront
In Route 53 I've got A records for IP4 and IP6:
Alias: Yes
Alias Target: d2dzwf20h9q46z.cloudfront.net.
Routing: Simple
In Cloudfront:
EC2 settings:
Would be super grateful if anyone can point me in the right direction. Thanks!
Change the Origin Protocol Policy to "HTTP Only".
Otherwise, CloudFront tries to connect to the EC2 instance via HTTPS, which will probably fail.
I think the issue is an SSL certificate mismatch. You are using a self-signed certificate on the origin server (the EC2 instance), but per AWS: "For origins other than ELB load balancers, you must use a certificate that is signed by a trusted third-party certificate authority, for example, Comodo, DigiCert, or Symantec."
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/using-https-cloudfront-to-custom-origin.html
You can try using a Let's Encrypt certificate on the instance.
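If you go that route, a minimal sketch (assuming nginx serves the site on the instance and react.edbiden.com resolves to it; package names are for Debian/Ubuntu) would be:

```shell
# Install certbot with the nginx plugin, then obtain and install a
# Let's Encrypt certificate for the domain.
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d react.edbiden.com
```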