AWS 403 Forbidden from Cloudflare Worker fetch - amazon-web-services

I'm getting a 403 Forbidden response when using fetch from a serverless Cloudflare Worker to my own .NET Core API hosted on an AWS EC2 instance, for both GET and POST. Example Worker code (also tested with init headers like User-Agent, Accept, etc., but with the same result):
fetch('http://54.xxx.xxx.xxx/test')
However, that same basic fetch to the API's IP URL returns 200 from local JavaScript and from a simple hosted webpage, as well as from Postman and curl.
The Cloudflare Worker can also fetch other APIs without issue:
fetch('http://jsonplaceholder.typicode.com/posts')
In the end I had to use the AWS DNS URL instead:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
This AWS Elastic Beanstalk setup is as basic as possible: a single t3a.nano instance with the default security group. I didn't see any documentation regarding the use of IP vs. DNS URLs, and they should resolve to the same address. I also don't see any options for dealing with DNS issues on the Cloudflare side, nor any similar issues on Stack Overflow.
So, after a lot of pain, I'm just documenting the solution here.

Under the Amazon instance summary you can find both "Public IPv4 address" and "Public IPv4 DNS".
From the Cloudflare Worker, the fetch with the public DNS name works:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
while the fetch with the public IP returns status 403 with statusText "Forbidden":
fetch('http://54.xxx.xxx.xxx/test')

Cloudflare Workers can make outbound HTTP requests, but only to domain names. It is not possible for a Worker to make a fetch request to an IP address.
I can't confirm the exact behavior you observed (your post was 9 months ago, and a number of things have changed with Cloudflare Workers since then), but in the last month or so I've observed that calling fetch() on an IP address results in the Worker seeing "Error 1003 Access Denied" as the fetch response.
There isn't much info, but here's what's available about Error 1003:
Error 1003 Access Denied: Direct IP Access Not Allowed
Common cause – A client or browser directly accesses a Cloudflare IP address.
Resolution – Browse to the website domain name in your URL instead of the Cloudflare IP address.
As you found, if you use a DNS name instead, fetch works fine.
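As a concrete illustration, here is a minimal Worker sketch under that constraint, using the question's placeholder DNS name for the EC2 instance and the modern module syntax:

export default {
  async fetch(request) {
    // Workers can only fetch() hostnames, not bare IP addresses, so use
    // the instance's "Public IPv4 DNS" name from the EC2 console instead.
    const upstream = 'http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test';
    // Pass the upstream response straight back to the caller.
    return fetch(upstream);
  },
};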

Related

How to view AWS site without a domain?

I have a site hosted with AWS, but the domain is not ready yet. I want to work on it and begin testing.
The site runs through a load balancer.
When I go to Load Balancers in EC2 I can see the DNS name. If I type this into my browser I get a warning that it is unsafe, and when I choose to load it anyway I get a DNS_PROBE_FINISHED_NXDOMAIN error.
I used the "dig A" command in the terminal to get the IP address. I added this IP address to my hosts file, and I get the same error when trying to access the site that way.
I get a warning that it is unsafe
It is unsafe because the default ALB URL does not use HTTPS. It only works over HTTP, which all major browsers mark as insecure.
To fix that, you need your own domain and a valid, public SSL certificate for that domain, set up using AWS ACM.
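If you want to confirm whether the name resolves at all before worrying about certificates, a quick Node.js check against a public resolver might look like this (the load balancer name is a placeholder):

// Resolve an ALB DNS name against a public resolver, bypassing the
// local resolver and hosts file. Assumes Node.js with the dns module.
const { Resolver } = require('node:dns').promises;

const resolver = new Resolver();
resolver.setServers(['8.8.8.8']);

resolver.resolve4('my-alb-123456789.us-east-1.elb.amazonaws.com')
  .then((ips) => console.log('resolves to', ips))
  .catch((err) => console.error('lookup failed:', err.code)); // ENOTFOUND maps to NXDOMAIN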

Cloud run service cannot resolve custom domain mapped to a different cloud run service

I am running a Go server on Cloud Run which makes REST HTTP calls to a different public Cloud Run service B. When using custom domain mapping for service B, any requests to it error out with the following:
Get https://<mydomain_name>/api/health: dial tcp: lookup <mydomain_name> on 169.254.169.254:53: no such host
However, the requests work when using the automatically allocated Cloud Run URL instead, e.g. https://<myservice_name>-xxxxxxx-ew.a.run.app
I am able to access the mapped domain name in the browser, and I can successfully dig it from my local terminal, from instances in different Google Cloud projects, and from the Cloud Shell instance. However, querying the domain's name servers from any instance in the Google Cloud project hosting service B does not return correct results (it fails with NXDOMAIN status).
To me it seems the domain is mapped correctly, but I am not sure what is blocking my attempts to access the service by domain name, whether in code or with curl, within the same Google Cloud project.
Any help will be appreciated.
NXDOMAIN is the internet's blunt way of saying "the answer to your question doesn't exist". Technically, it's saying that the domain name referenced in the Domain Name System (DNS) query does not exist. NXDOMAIN, which stands for non-existent domain, is an answer that only an authoritative nameserver can return.
If you issue a query for a domain name that does not exist, Google Public DNS always returns an NXDOMAIN record, as per the DNS protocol standards. The browser should show this response as a DNS error.
On the other hand, if the domain name exists, nameservers and DNS resolvers will work to return the positive NOERROR response. The specific IP address answer to the DNS query will be returned as well. (It is also possible to receive a NOERROR response without any specific answers. This happens if the domain exists, but not the DNS record type requested.)
If, instead, you receive any response other than an error message (for example, you are redirected to another page), this could be the result of the following:
A client-side application such as a browser plug-in is displaying an alternate page for a non-existent domain.
Some ISPs may intercept and replace all NXDOMAIN responses with responses that lead to their own servers. If you are concerned that your ISP is intercepting Google Public DNS requests or responses, you should contact your ISP.
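To narrow down whether the failure is specific to the resolver named in the error (169.254.169.254), one could compare it with a public resolver from inside the affected project. A sketch in Node.js, with the domain as a placeholder for <mydomain_name>:

// Compare the resolver used inside the project with a public one.
const { Resolver } = require('node:dns').promises;

async function check(server, name) {
  const r = new Resolver();
  r.setServers([server]);
  try {
    console.log(server, '->', await r.resolve4(name));
  } catch (err) {
    console.log(server, '->', err.code); // ENOTFOUND here mirrors the NXDOMAIN above
  }
}

const name = 'mydomain.example.com'; // placeholder for <mydomain_name>
check('169.254.169.254', name); // the resolver from the error message
check('8.8.8.8', name);         // public resolver for comparison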

AWS Internal ALB is unable to re-direct to private MWAA webserver

I am attempting to set up MWAA in AWS, and the UI web server needs to be inside a private subnet. Based on the documentation, access to the web server's VPC endpoints requires a VPN, bastion host, or load balancer, and I would ideally like to use the load balancer to grant users access.
I can see the VPC endpoint that was created, and it is associated with an IP in each of the two subnets chosen during the initial environment setup.
Target groups were set up with these IP addresses as targets on HTTPS:443.
An internal ALB was created with the target groups above.
The UI is presented in the MWAA console as a pop-out link. When accessing that, I am sent to a page that says "This site can't be reached", and the URL has a syntax similar to
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I replace the beginning of the URL as below, I am able to get to the proper MWAA webpage. There are some HTTPS certificate issues, which I can figure out later, but this seems to be the landing page I need to reach.
https://<INTERNAL_ALB_A_RECORD>/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I access just the internal ALB A record in my browser
https://<INTERNAL_ALB_A_RECORD>
I get redirected to an MWAA login page; after clicking the login button, I am redirected to the URL below, which again shows the "This site can't be reached" page.
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
I am not sure exactly where the issue is, but it seems I am not being redirected to where I need to go.
Should I try an NLB pointing to the ALB as a target group? Additionally, I read that to access an internal ALB you need access to the VPC. What does this mean exactly?
I am still unsure of the root cause of the redirection not taking place.
Our network does not need an ALB or NLB in this scenario, since we have a Direct Connect setup that allows us to resolve VPC endpoints when on our corporate VPN. I still end up at a page with a "This site can't be reached" error, and to get to the proper landing page I just have to press Enter again in my browser's address bar.
If anyone else comes across this, make sure that you have the login token appended to the end of your URL before hitting Enter again.
I have a case open with AWS on this so I will update if they are able to figure out the root cause.
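For what it's worth, the URL rewrite described above (swapping the vpce hostname for the internal ALB record while keeping the SSO path and login token) can be expressed with the standard URL API. The hostnames here are placeholders for the values in the question:

// Rewrite the vpce URL from the MWAA console onto the internal ALB,
// keeping the /aws_mwaa/aws-console-sso path and the login token intact.
const webToken = '...'; // the <MWAA_WEB_TOKEN> the console appends after login=true
const url = new URL('https://xxxx-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true' + webToken);
url.host = 'internal-my-alb-123456789.us-east-1.elb.amazonaws.com'; // <INTERNAL_ALB_A_RECORD>
console.log(url.toString()); // same path and query, now pointing at the ALB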

AWS ElasticSearch Request not giving response in Postman

https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
I am following the document above to connect to my AWS Elasticsearch domain via Postman.
What I want to achieve: send a request and get the response.
I have set up everything related to authentication as well, but it still times out with the error 'Could not get any response'.
My Postman settings related to SSL are also correct.
Sample URL:
https://vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com/elasticsearch_index/_search?source={"query":{"bool":{"should":[{"multi_match":{"query":"abc","fields":["name.suggestion"],"fuzziness":1}}]}},"size":10,"_source":["name"],"highlight":{"fields":{"name.suggestion":{}},"pre_tags":["\u003Cem\u003E"],"post_tags":["\u003C\/em\u003E"]}}&source_content_type=application/json
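For readability, the query embedded in the source parameter above is equivalent to sending this POST body to the _search endpoint (it will still time out from outside the VPC, as explained below):

// The same search, sent as a POST body instead of a URL parameter.
const body = {
  query: {
    bool: {
      should: [
        { multi_match: { query: 'abc', fields: ['name.suggestion'], fuzziness: 1 } },
      ],
    },
  },
  size: 10,
  _source: ['name'],
  highlight: {
    fields: { 'name.suggestion': {} },
    pre_tags: ['<em>'],
    post_tags: ['</em>'],
  },
};

fetch('https://vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com/elasticsearch_index/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body),
});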
Since your ES domain is in a VPC, you can't access it from the internet. Security groups and "allowing the port" are unfortunately not enough.
The following is written in the docs:
If you try to access the endpoint in a web browser, however, you might find that the request times out. To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Some options to consider are:
Set up a bastion host in the VPC's public subnet and SSH-tunnel the connection from ES to your local machine through the bastion host (see the sketch after this list). This is the easiest of the ad-hoc proxy solutions mentioned in the docs.
Accessing the ES directly from the bastion host (e.g. over remote desktop).
Setting up a proxy server to proxy all requests from the internet into the ES.
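As a rough sketch of the bastion option, assuming a tunnel is already running (the key file, bastion address, and endpoint are placeholders):

// Tunnel started separately, forwarding local port 9200 to the VPC-only
// ES endpoint through the bastion:
//   ssh -i my-key.pem -N -L 9200:vpc-abc-xxxx.ap-southeast-1.es.amazonaws.com:443 ec2-user@bastion-public-ip
// With the tunnel up, requests to localhost reach the domain. Note the
// domain's certificate won't match "localhost", so an ad-hoc test needs
// certificate verification relaxed (e.g. curl -k, or Postman's SSL toggle).
fetch('https://localhost:9200/elasticsearch_index/_search')
  .then((res) => res.json())
  .then((data) => console.log(data));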
For creating and managing an ES domain, you can refer to this documentation.
While creating the ES domain, in the Network configuration section you can choose either VPC access or Public access. If you select Public access, you can secure your domain with an access policy that only allows specific users or IP addresses to access it.
To learn more about access policies, you can refer to this SO answer.
So, if you create your ES domain outside the VPC, with public access, you can easily send requests and get responses through Postman without adding any authorization.
The endpoint in the URL is the one generated when you created your ES domain.
The steps are: create an index, add data into the index, and use the GET mapping API to retrieve the mapping of the index you created. You can then check from the AWS console that this index was created in the ES domain.
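The snippets for those steps are not shown above; against a public-access domain they would be the standard Elasticsearch REST calls, roughly like this (the endpoint and index names are illustrative):

// Inside an async context, against a hypothetical public-access endpoint.
const domain = 'https://search-mydomain-xxxx.ap-southeast-1.es.amazonaws.com';

// Create an index.
await fetch(`${domain}/my_index`, { method: 'PUT' });

// Add a document to the index.
await fetch(`${domain}/my_index/_doc`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'abc' }),
});

// Get the mapping of the index just created.
const mapping = await (await fetch(`${domain}/my_index/_mapping`)).json();
console.log(mapping);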

Secure connection from S3 to EC2 on AWS

I'm sure this is a fairly simple question regarding EC2 and S3 on AWS.
I have a static website hosted on S3 that connects to a MongoDB server on an EC2 instance, which I want to secure. Currently it's open to the entire internet (0.0.0.0/0) on port 27017, the MongoDB default. For security reasons, I want to restrict inbound traffic to requests from the S3 static website only. Apparently S3 does not supply fixed addresses, which is causing a problem.
My only thought was to open the port to all IP ranges for the S3 region I am in. This doc on AWS explains how to find these, although they are subject to change without notice:
http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Would this be the way to proceed, or am I missing something obvious here? Is there perhaps another way to assign an IP to S3?
S3 is a storage service, not a compute service, so it cannot make requests to your MongoDB. When S3 serves static webpages, your browser renders them, and when a user clicks a link that connects to your MongoDB, the request goes to MongoDB from the user's computer.
So MongoDB sees the request coming from the user's IP. Since you do not know where users are coming from (or their IP ranges), you have no choice but to listen for traffic from any IP.
I think it is not possible to allow only your S3-hosted site to access your DB inside EC2, since S3 does not offer a fixed IP address for you.
So it's better to try an alternative: instead of accessing the DB directly, proxy through an HTTPS service inside your EC2 instance and restrict the inbound traffic on your MongoDB port (see the sketch below).
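A minimal sketch of that proxy idea, assuming Express and the official MongoDB Node.js driver, with MongoDB bound to localhost so only the proxy is reachable (the origin, database, and collection names are illustrative):

const express = require('express');
const { MongoClient } = require('mongodb');

// MongoDB listens on 127.0.0.1 only; the security group exposes just this app.
const client = new MongoClient('mongodb://127.0.0.1:27017');
const app = express();

app.get('/api/items', async (req, res) => {
  // Allow only the S3 website origin to call this API from the browser.
  res.set('Access-Control-Allow-Origin', 'http://my-site.s3-website-us-east-1.amazonaws.com');
  const items = await client.db('mydb').collection('items').find().limit(20).toArray();
  res.json(items);
});

// TLS termination (the HTTPS part) would sit in front, e.g. nginx or an ALB.
client.connect().then(() => app.listen(3000));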
S3 won't request your MongoDB server on the EC2 instance. From my understanding, your JS files in the browser request the MongoDB running on the EC2 instance. In that case you have to add headers in the MongoDB configuration to allow CORS.