I have a proxy+ resource configured like this.
The NLB is internal, so I am using a VPC Link, but when I hit the API Gateway stage URL, I am getting a 404. Below are the logs:
(some-request-id) Sending request to http://masked.elb.us-east-1.amazonaws.com/microservice/v2/api-docs
Received response. Status: 404, Integration latency: 44 ms
But when I copy and paste the same NLB URL from the log into the browser, I get a JSON response back with HTTP 200.
What is it that I am missing here?
This 404 is being returned by the application behind your load balancer, so it is definitely connecting.
I can see from your request that the hostname you're specifying is an ELB name. Is the application listening on this hostname? Some web servers, such as Apache or Nginx, will fall back to the first vhost when the request's Host header does not match any other vhost, which may mean the request never reaches your application.
The domain name you specify in API Gateway should be the one it will present to the host; the VPC Link already stores which load balancer the link is for. So if your API has a vhost for https://api.example.com, you would specify https://api.example.com/{proxy}.
On your host you should be able to see in the access logs (and error logs) which host/path it is trying to load.
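To check whether vhost matching is the problem, you can connect to the NLB but present the hostname the application expects. A minimal Node.js sketch, assuming api.example.com is the vhost (both names here are placeholders):

const http = require('http');
// connect to the NLB, but send the vhost name the backend expects
http.get({
  host: 'masked.elb.us-east-1.amazonaws.com', // the TCP connection target (the NLB)
  path: '/microservice/v2/api-docs',
  headers: { Host: 'api.example.com' }        // assumed vhost name; replace with yours
}, res => console.log('status:', res.statusCode));

If this returns 200 while the bare ELB hostname returns 404, the vhost is the culprit.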
It turns out that I was pointing to the wrong VPC Link. Once I pointed to the correct VPC Link, it started working.
The key here is that even though the API Gateway logs tell me it is hitting http://masked.elb.us-east-1.amazonaws.com/microservice/v2/api-docs, it doesn't actually hit this URL. Instead it hits the NLB that the VPC Link is attached to.
I confirmed this by changing the domain name in the Endpoint URL to:
http://domainwhichdoesnotexist.com/microservice/v2/api-docs
And in the logs I see this:
Thu Jul 30 09:28:09 UTC 2020 : Sending request to http://domainwhichdoesnotexist.com/microservice/api/api-docs
Thu Jul 30 09:28:09 UTC 2020 : Received response. Status: 200, Integration latency: 72 ms
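If you want to verify which NLB a VPC Link actually targets, the API Gateway API exposes this. A hedged sketch using the AWS SDK for JavaScript v3 (the region is an assumption):

const { APIGatewayClient, GetVpcLinksCommand } = require('@aws-sdk/client-api-gateway');

const client = new APIGatewayClient({ region: 'us-east-1' });
// list every VPC Link in the account along with the load balancer ARNs it targets
client.send(new GetVpcLinksCommand({}))
  .then(res => res.items.forEach(link =>
    console.log(link.id, link.name, link.targetArns)));

Comparing targetArns against the NLB you intended to use would surface this kind of mix-up immediately.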
Related
I'm getting a 403 Forbidden response when using fetch from a serverless Cloudflare Worker to my own .NET Core API hosted on an AWS EC2 instance. Both GET and POST fail. Example worker code (also tested with init headers like User-Agent, Accept, etc., but same result):
fetch('http://54.xxx.xxx.xxx/test')
However, that basic fetch to the API's IP URL returns 200 from local JavaScript and a simple hosted webpage, as well as from Postman and curl.
Also, the Cloudflare Worker can fetch other APIs without issue.
fetch('http://jsonplaceholder.typicode.com/posts')
In the end I had to use the AWS DNS URL instead:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
This AWS Elastic Beanstalk setup is as basic as possible: a single t3a.nano instance with the default security group. I didn't see any documentation regarding the use of IP vs. DNS URLs, but they should resolve to the same IP. I also don't see any options for dealing with DNS issues on the Cloudflare side, nor any similar issues on Stack Overflow.
So after a lot of pain, I'm just documenting the solution here.
Under the amazon instance summary you can find both "Public IPv4 address" and "Public IPv4 DNS".
From the Cloudflare Worker, the fetch with the public DNS works:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
and the fetch with the public IP returns status 403 with statusText "Forbidden":
fetch('http://54.xxx.xxx.xxx/test')
Cloudflare Workers can make outbound HTTP requests, but only to domain names. It is not possible for a Worker to make a fetch request to an IP address.
I can't confirm the exact behavior you observed (your post was 9 months ago, and a number of things have changed with Cloudflare Workers since then), but in the last month or so I've observed that calling fetch() on an IP address results in the Worker seeing "Error 1003 Access Denied" as the fetch response.
There isn't much info, but here's what's available about Error 1003:
Error 1003 Access Denied: Direct IP Access Not Allowed
Common cause – A client or browser directly accesses a Cloudflare IP address.
Resolution – Browse to the website domain name in your URL instead of the Cloudflare IP address.
As you found, if you use a DNS name instead, fetch works fine.
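To see the two behaviors side by side, here is a minimal module-syntax Worker sketch reusing the placeholder addresses from the question:

export default {
  async fetch(request) {
    // works: a DNS name resolves normally from a Worker
    const byDns = await fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test');
    // rejected: direct IP access is not allowed from a Worker (403 / Error 1003)
    const byIp = await fetch('http://54.xxx.xxx.xxx/test');
    return new Response(`dns=${byDns.status} ip=${byIp.status}`);
  }
};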
I have a Stripe webhook which is successfully caught and processed in Stripe's TEST MODE, on a local HTTP server.
However, when switching to Stripe's LIVE MODE, the webhook returns status code 500 while the EC2 instance is untouched, with no logs being generated.
There is no issue with signing secrets or Stripe keys; the event never reaches the HTTPS endpoint of the EC2 instance, which was created using a Load Balancer.
Stripe's support cannot speak to this, so any suggestion as to why this could happen, or how to handle it, is very welcome.
The error displayed on Stripe is:
HTTP status code 500 (Internal Server Error)
Response Failed to connect to remote host
I have added a catch-all logging middleware to the Express server running on EC2:
app.use((req, res, next) => {
console.log('Always inside ', req.originalUrl);
next();
});
before the route that handles the Stripe webhook URL
app.use('/afterpayment', bodyParser.raw({ type: 'application/json' }), afterPaymentRoutes);
in order to see if the Stripe event reaches the server, which is not happening.
However, if I manually enter the Stripe webhook URL, domain/afterpayment, into the browser, the result is as expected: the logging middleware prints the message and the webhook handler takes over.
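For reference, the route behind '/afterpayment' follows Stripe's documented raw-body verification pattern. A minimal sketch of what afterPaymentRoutes might look like (the env var names and event handling are assumptions):

const express = require('express');
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

const router = express.Router();

router.post('/', (req, res) => {
  let event;
  try {
    // req.body is a raw Buffer because bodyParser.raw() ran for this route
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers['stripe-signature'],
      process.env.STRIPE_WEBHOOK_SECRET // assumed env var name
    );
  } catch (err) {
    return res.status(400).send(`Signature verification failed: ${err.message}`);
  }
  console.log('Received event', event.type);
  res.json({ received: true });
});

module.exports = router;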
I was having a similar problem and watching this thread. In my case, the issue was a few different things.
I'm forcing HTTPS to my site (the ELB redirects any traffic from 80 to 443), and the app on my EC2 was accepting connections over port 80. Access to the site was working, so I thought maybe Stripe sending the webhook data to the ELB was breaking because of the redirect. This wasn't the case.
However, I had a security group that was only allowing access from my IP address (for testing). Changing this to 0.0.0.0/0 (actual production access) didn't completely fix the problem, but I wanted to get things set up as close to real-world as possible. (Yes, I could have created a security group that only allowed access from the IP addresses the webhook data originates from, but again, I wanted to keep this as close to production as possible. Thanks #justin-michael for the nudge in the right direction.)
In the Stripe dashboard I created a new webhook pointing to the app endpoint I exposed for testing, then hit the "Send a test webhook" button. This time, instead of a timeout, the error was an invalid signature, so I knew that exposing the site to the internet was part of the problem. My app was still using the test webhook I set up for development, and when I created the new webhook it also created a new signing secret. I pulled this new signing secret into my app, ran "send test webhook" again, and it was successful.
So, allowing the correct access from Stripe and making sure the signing secret was correct fixed the problem for me.
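For what it's worth, the security group change described above can also be scripted. A hedged sketch with the AWS SDK for JavaScript v3 (the group id, region, and port are placeholders):

const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-west-1' });
ec2.send(new AuthorizeSecurityGroupIngressCommand({
  GroupId: 'sg-0123456789abcdef0', // placeholder security group id
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 443,
    ToPort: 443,
    IpRanges: [{ CidrIp: '0.0.0.0/0' }] // open the listener to the internet
  }]
})).then(() => console.log('ingress rule added'));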
The problem was that the domain was not properly exposed on the internet.
So I have an Elastic Beanstalk environment running a Node.js server app, on which I set up a Load Balancer and exposed the server over HTTPS.
While trying to catch a webhook sent by a 3rd-party app like Stripe, nothing arrived on the server, even though I could successfully simulate POST requests to the domain endpoint. The domain was also accessible through the browser (or so it seemed).
The issue was that the domain name linked to the load balancer was not publicly resolvable on the internet. Here are two useful links:
https://www.ssllabs.com/ssltest/index.html
https://dns.google.com/query?name=&rr_type=ALL&ecs=
Running tests on them revealed problems related to the DNSSEC configuration of my domain, which was not enabled.
Following these instructions, I did the following:
On Hosted Zones, under DNSSEC signing -> Enable DNSSEC signing.
Created a KSK and a customer managed CMK.
Under DNSSEC signing, copied the information from View information to create DS record.
On Route 53, under Registered Domains -> the domain -> DNSSEC status, created a new key with the info from the previous step.
After this, all tests passed and the webhook was successfully handled.
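A quick way to check the same resolvability issue from code before re-testing the webhook; a small Node.js sketch (the domain is a placeholder):

const dns = require('dns').promises;

// this uses the machine's configured resolver, so run it from outside your
// own network to approximate what a 3rd party like Stripe actually sees
dns.resolveAny('api.example.com') // placeholder for the load balancer's domain
  .then(records => console.log('publicly resolvable:', records))
  .catch(err => console.error('resolution failed:', err.code));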
I have the following setup for my website on Windows Server:
Domain registered in Route 53
EC2 instance running Windows Server
CloudFront serving the EC2 origin through a distribution, with the option to redirect users from HTTP to HTTPS.
A public certificate deployed on CloudFront.
Here is what is working:
The EC2 origin: every page works over HTTP.
Domain access correctly redirects users from HTTP to HTTPS.
The first website page loads without issues.
ISSUE:
The issue is the 504 error which is displayed when any link on the website is clicked. Here is the complete error detail:
504 ERROR
The request could not be satisfied.
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
I have included all the route options to accept both HTTP and HTTPS.
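Since the error points at the origin connection, one way to narrow this down is to request the failing pages on the origin directly with a timeout, from outside CloudFront. A rough Node.js sketch (host and path are placeholders):

const http = require('http');

const req = http.get(
  { host: 'ec2-origin.example.com', path: '/some-linked-page' },
  res => console.log('origin status:', res.statusCode)
);
// if the origin hangs past its timeout, CloudFront surfaces that as a 504
req.setTimeout(10000, () => {
  console.error('origin did not respond in time');
  req.destroy();
});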
I have deployed an API in AWS API Gateway with a VPC Link which connects to an ELB endpoint. There is an EC2 instance behind the ELB running Spring MVC on Tomcat 8. The problem is that the API is not very stable: when I test it in both the AWS console and Postman, 4 out of 10 times it gives a 404 error, and the rest get the correct response. When I test it using the ELB endpoint URL in Postman, it works perfectly and never throws the 404 error. After some digging, I found out that when the 404 error happens, the request doesn't even reach the ELB; I cannot find any trace of it in the ELB logs or CloudWatch. Any help is very much appreciated.
HTTP Status 404 – Not Found
Type: Status Report
Description: The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.
Apache Tomcat/8.5.32
I'm trying to redirect a call from API Gateway to a public ELB in AWS. The ELB is open to the world, but I cannot make it work going through API Gateway.
API Gateway configuration:
I get this response in Postman when I call the events operation:
{
"message": "Internal server error"
}
And from the AWS test console, I'm getting this error:
Wed Jan 17 20:29:12 UTC 2018 : Execution failed due to configuration error: Host name 'public-elb.amazonaws.com' does not match the certificate subject provided by the peer (CN=*.confidential.com)
Wed Jan 17 20:29:12 UTC 2018 : Method completed with status: 500
I assume that the ELB is reachable because when I change to another random URL, the error is "Invalid endpoint address".
Why am I getting this error? I only have one certificate, and it is the same in the URL and on the ELB.
Your error is caused by the SSL certificate having the common name "*.confidential.com" while you are connecting to a different name, "public-elb.amazonaws.com".
The solution is to create an ALIAS (preferred) or CNAME record in DNS that maps your domain name to the ELB DNS name. Then use that name in your redirect.
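A hedged sketch of creating that ALIAS record with the AWS SDK for JavaScript v3 (all ids and names here are placeholders; the ELB's own hosted zone id is region-specific):

const { Route53Client, ChangeResourceRecordSetsCommand } = require('@aws-sdk/client-route-53');

const r53 = new Route53Client({ region: 'us-east-1' });
r53.send(new ChangeResourceRecordSetsCommand({
  HostedZoneId: 'Z111111PLACEHOLDER', // placeholder: your hosted zone for confidential.com
  ChangeBatch: {
    Changes: [{
      Action: 'UPSERT',
      ResourceRecordSet: {
        Name: 'api.confidential.com', // placeholder name covered by *.confidential.com
        Type: 'A',
        AliasTarget: {
          DNSName: 'public-elb.amazonaws.com', // the ELB DNS name
          HostedZoneId: 'ZELBPLACEHOLDER',     // placeholder: the ELB's hosted zone id
          EvaluateTargetHealth: false
        }
      }
    }]
  }
})).then(() => console.log('alias record upserted'));

With https://api.confidential.com/{proxy} as the integration endpoint, the certificate's common name then matches the hostname API Gateway connects with.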