https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
I am following the document above to connect to my AWS Elasticsearch domain via Postman.
What I want to achieve: send a request and get the response.
I have set up everything related to authentication as well, but it still gives a timeout error:
'Could not get any response'.
My Postman settings related to SSL are also correct.
Sample URL:
https://vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com/elasticsearch_index/_search?source={"query":{"bool":{"should":[{"multi_match":{"query":"abc","fields":["name.suggestion"],"fuzziness":1}}]}},"size":10,"_source":["name"],"highlight":{"fields":{"name.suggestion":{}},"pre_tags":["\u003Cem\u003E"],"post_tags":["\u003C\/em\u003E"]}}&source_content_type=application/json
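For reference, the query packed into the `source` parameter above is plain JSON; a sketch of the equivalent request sent as a POST body (which avoids URL-encoding the query), assuming Node 18+ with the global `fetch`. The endpoint path and index name are the ones from the sample URL, which is only reachable from inside the VPC:

```javascript
// The same query as in the URL's `source` parameter, as a plain object.
const query = {
  query: {
    bool: {
      should: [
        { multi_match: { query: 'abc', fields: ['name.suggestion'], fuzziness: 1 } },
      ],
    },
  },
  size: 10,
  _source: ['name'],
  highlight: {
    fields: { 'name.suggestion': {} },
    pre_tags: ['<em>'],
    post_tags: ['</em>'],
  },
};

// POSTing the JSON body avoids URL-encoding it into the `source` parameter.
async function search(endpoint) {
  const res = await fetch(`${endpoint}/elasticsearch_index/_search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(query),
  });
  return res.json();
}
```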
Since your ES domain is in a VPC, you can't access it from the internet. Using security groups and "allowing the port" is unfortunately not enough.
The following is written in the docs:
If you try to access the endpoint in a web browser, however, you might find that the request times out. To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Some options to consider are:
Set up a bastion host in the VPC's public subnet, and SSH-tunnel the connection from ES to your local machine through the bastion host. This is the easiest ad-hoc proxy solution mentioned in the docs.
Access ES directly from the bastion host (e.g. via remote desktop on the bastion).
Set up a proxy server that proxies requests from the internet to ES.
For creating and managing an ES domain, you can refer to this documentation.
While creating an ES domain, in the Network configuration section you can choose either VPC access or Public access. If you select Public access, you can secure your domain with an access policy that allows only specific users or IP addresses to access it.
To know more about access policies, you can refer to this SO answer.
So, if you create your ES domain outside a VPC with public access, you can easily send requests and get responses through Postman, without adding any Authorization.
The endpoint in the URL is the endpoint generated when you created your ES domain.
You can create an index with a PUT request to the endpoint, add data to the index with a POST request, and use the GET mapping API to inspect the mapping of the created index.
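The steps above can be sketched with Node 18+'s global `fetch`; the endpoint and index names below are made-up placeholders, not values from the original post:

```javascript
// Hypothetical public-access endpoint and index name -- substitute your own.
const endpoint = 'https://search-mydomain-abc123.ap-southeast-1.es.amazonaws.com';
const index = 'movies';

// Small helper to build request URLs against the domain.
function esUrl(path) {
  return `${endpoint}/${index}${path}`;
}

async function demo() {
  // Create the index (PUT /<index>)
  await fetch(esUrl(''), { method: 'PUT' });

  // Add a document to the index (POST /<index>/_doc)
  await fetch(esUrl('/_doc'), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'abc' }),
  });

  // Get the mapping of the created index (GET /<index>/_mapping)
  const mapping = await (await fetch(esUrl('/_mapping'))).json();
  console.log(mapping);
}
```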
Now you can check from your AWS console that this index has been created in the ES domain.
I am attempting to set up MWAA in AWS, and the UI web server needs to be inside a private subnet. Based on the documentation, setting up access to the web server VPC endpoints requires a VPN, bastion, or load balancer, and I would ideally like to use the load balancer to grant users access.
I can see the VPC endpoint that was created; it is associated with an IP in each of the two subnets chosen during the initial environment setup.
Target groups were set up for these IP addresses with HTTPS:443.
An internal ALB was created with the target groups above.
The UI is presented in the MWAA console as a pop-out link. When accessing it, I am sent to a page that says "This site can't be reached", and the URL has a syntax similar to
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I replace the beginning of the URL with the one below, I am able to get to the proper MWAA webpage. There are some HTTPS certificate issues which I can figure out later, but this seems to be the proper landing page I need to reach.
https://<INTERNAL_ALB_A_RECORD>/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I access just the internal ALB A record in my browser
https://<INTERNAL_ALB_A_RECORD>
I get redirected to a login page for MWAA; after clicking the login button, I am redirected to the URL below, which shows the "This site can't be reached" page.
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
I am not sure exactly where the issue is, but it seems I am not being redirected where I need to go.
Should I try an NLB pointing to the ALB as a target group? Additionally, I read that to access an internal ALB you need access to the VPC. What does this mean exactly?
Still unsure of the root cause of the failed redirect.
Our networking does not need an ALB or NLB in this scenario, since we have an AWS Direct Connect setup established, which allows us to resolve VPC endpoints while on our corporate VPN. I still end up at a page with a "This site can't be reached" error; to get to the proper landing page, I just have to press Enter again in my browser's URL bar.
If anyone else comes across this, make sure the login token is appended to the end of your URL before pressing Enter again.
I have a case open with AWS on this so I will update if they are able to figure out the root cause.
I'm getting a 403 Forbidden response when using fetch from a serverless Cloudflare Worker to my own .NET Core API hosted on an AWS EC2 instance, for both GET and POST. Example worker code (also tested with init headers such as User-Agent and Accept, with the same result):
fetch('http://54.xxx.xxx.xxx/test')
However, a basic fetch to that API IP URL returns 200 from local JavaScript and from a simple hosted webpage, as well as from Postman and curl.
The Cloudflare Worker can also fetch other APIs without issue:
fetch('http://jsonplaceholder.typicode.com/posts')
In the end I had to use the AWS DNS name instead:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
This AWS Elastic Beanstalk setup is as basic as possible: a single t3a.nano instance with the default security group. I didn't see any documentation regarding the use of IP vs. DNS URLs, but they should resolve to the same IP. I also don't see any options to deal with DNS issues on the Cloudflare side, nor any similar issues on Stack Overflow.
So, after a lot of pain, I'm just documenting the solution here.
Under the Amazon instance summary you can find both "Public IPv4 address" and "Public IPv4 DNS".
From the Cloudflare Worker, the fetch with the public DNS name works:
fetch('http://ec2-54-xxx-xxx-xxx.us-west-1.compute.amazonaws.com/test')
while the fetch with the public IP returns status 403 with statusText "Forbidden":
fetch('http://54.xxx.xxx.xxx/test')
Cloudflare Workers can make outbound HTTP requests, but only to domain names. It is not possible for a Worker to make a fetch request to an IP address.
I can't confirm the exact behavior you observed (your post was 9 months ago, and a number of things have changed with Cloudflare Workers since then), but in the last month or so I've observed that calling fetch() on an IP address results in the worker seeing "Error 1003 Access Denied" as the fetch response.
There isn't much info, but here's what's available about Error 1003:
Error 1003 Access Denied: Direct IP Access Not Allowed
Common cause – A client or browser directly accesses a Cloudflare IP address.
Resolution – Browse to the website domain name in your URL instead of the Cloudflare IP address.
As you found, if you use a DNS name instead, fetch works fine.
I have built an AWS API Gateway endpoint which one of the machines on my company's network will hit to POST data at regular intervals. The office firewall blocks it when I try from the office network through Postman (when I use a mobile hotspot or other Wi-Fi it works seamlessly, since there is no firewall challenge), so I have to give the range of IP addresses to our network security team to whitelist before the machine can hit the API endpoint.
Where do I get the IPs? Will they be constant or changing? Since raising tickets for IPs to be whitelisted by the network security team is a long process, is there a smoother way to handle this?
Also, is there a risk associated with this way of pushing data from on-prem to the cloud? I've already implemented AWS IAM authorization and an API key for security and access control. If there is still a risk, how do I make this process totally secure?
Please help!
Unfortunately, you cannot give a static IP to an API Gateway, as it can change without notice; that is by AWS design. What you can do in that case is put a reverse proxy with an Elastic IP in front, which transparently routes your HTTP traffic to the API Gateway. (You then need a domain name and a certificate, because you will no longer use the API Gateway hostname.)
Also, is there a risk associated with this way of pushing data from on-prem to the cloud? I've already implemented AWS IAM authorization and an API key for security and access control. If there is still a risk, how do I make this process totally secure?
Nothing is totally secure in any organization, but to protect data in transit you should use an encrypted channel such as HTTPS (which API Gateway supports natively). That is also why you need a domain name and a certificate for the proxy.
how about:
https://www.reddit.com/r/aws/comments/8f1pve/whitelisting_aws_api_gateway/
API Gateway is proxied through CloudFront, so you could whitelist the IPs here that are for the CLOUDFRONT service. The IPs are rotated, so you'll need to update your whitelist whenever they change.
There is an SNS topic that you can subscribe to that sends out the IP ranges of AWS services whenever they are updated.
https://aws.amazon.com/blogs/security/how-to-automatically-update-your-security-groups-for-amazon-cloudfront-and-aws-waf-by-using-aws-lambda/
https://aws.amazon.com/blogs/aws/subscribe-to-aws-public-ip-address-changes-via-amazon-sns/
Unless it's a regional APIGW endpoint.
This page explains the IP ranges of AWS: https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
You need the ranges for the "EC2" service in the region where your API Gateway is deployed.
It was not clear to me which of the IP ranges in the JSON belong to API Gateway endpoints. I did a few experiments with nslookup and found that you need the ranges for the "EC2" service, using a script like this one:
const IPCIDR = require("ip-cidr");
const ipRanges = require('./ips.json');

const relevantRanges = ipRanges.prefixes.filter(el => {
  const cidr = new IPCIDR(el.ip_prefix);
  // These IPs come from `nslookup` on my endpoint; they belong to EC2 in the region.
  return cidr.contains('3.5.136.1') || cidr.contains('18.192.0.1');
});

console.log(JSON.stringify(relevantRanges, undefined, 2));
You can use it to experiment and find out which service and ranges are responsible for an endpoint.
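To make the service/region relationship concrete, here is a hedged sketch that filters the prefixes by the `service` and `region` fields directly. The sample entries are illustrative placeholders, not current AWS values; in practice you would load the real file from https://ip-ranges.amazonaws.com/ip-ranges.json:

```javascript
// Illustrative subset of ip-ranges.json -- placeholder entries, not live data.
const ipRanges = {
  prefixes: [
    { ip_prefix: '3.5.136.0/22', region: 'eu-central-1', service: 'EC2' },
    { ip_prefix: '52.94.76.0/22', region: 'us-west-2', service: 'AMAZON' },
  ],
};

// Return the CIDR ranges tagged as EC2 for one region.
function ec2RangesFor(region) {
  return ipRanges.prefixes
    .filter((p) => p.service === 'EC2' && p.region === region)
    .map((p) => p.ip_prefix);
}

console.log(ec2RangesFor('eu-central-1')); // e.g. [ '3.5.136.0/22' ]
```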
I am trying to set Listener rules on an ALB. I want to add Google Oauth support to one of my servers.
Here are the Google endpoints I am using
I see the Google auth page all right, but on the callback URL I'm seeing a 500 Internal Server Error. I've also set the callback URL. I'm at a loss as to what's wrong here. Any help is most appreciated!
After authentication, I'm not redirecting to my application; instead I've set the ALB to show a simple text-based response.
I struggled with the same problem for hours, and in the end it turned out that the user info endpoint was wrong. I was using the same one as you, but it should be https://openidconnect.googleapis.com/v1/userinfo.
I haven’t found any Google documentation saying what the value should be, but found this excellent blog post that contained a working example: https://cloudonaut.io/how-to-secure-your-devops-tools-with-alb-authentication/ (the first example uses Cognito, but the second uses OIDC and Google directly).
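For reference, a sketch of the full set of endpoint values for the ALB's OIDC authentication action, taken from Google's OIDC discovery document (https://accounts.google.com/.well-known/openid-configuration); the client ID and secret are placeholders:

```javascript
// Hedged sketch of an ALB authenticate-oidc configuration for Google.
// Endpoint URLs come from Google's OIDC discovery document;
// clientId/clientSecret are placeholders you must replace.
const authenticateOidcConfig = {
  issuer: 'https://accounts.google.com',
  authorizationEndpoint: 'https://accounts.google.com/o/oauth2/v2/auth',
  tokenEndpoint: 'https://oauth2.googleapis.com/token',
  userInfoEndpoint: 'https://openidconnect.googleapis.com/v1/userinfo',
  clientId: '<YOUR_CLIENT_ID>.apps.googleusercontent.com',
  clientSecret: '<YOUR_CLIENT_SECRET>',
  scope: 'openid email',
};
```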
From the AWS documentation:
HTTP 500: Internal Server Error
Possible causes:
You configured an AWS WAF web access control list (web ACL) and there was an error executing the web ACL rules.
You configured a listener rule to authenticate users, but one of the following is true:
The load balancer is unable to communicate with the IdP token endpoint or the IdP user info endpoint. Verify that the security groups for your load balancer and the network ACLs for your VPC allow outbound access to these endpoints. Verify that your VPC has internet access. If you have an internal-facing load balancer, use a NAT gateway to enable internet access.
The size of the claims returned by the IdP exceeded the maximum size supported by the load balancer.
A client submitted an HTTP/1.0 request without a host header, and the load balancer was unable to generate a redirect URL.
A client submitted a request without an HTTP protocol, and the load balancer was unable to generate a redirect URL.
The requested scope doesn't return an ID token.
I'm sure this is a fairly simple question regarding EC2 and S3 on AWS.
I have a static website hosted on S3 which connects to a MongoDB server on an EC2 instance, which I want to secure. Currently it is open to the entire internet (0.0.0.0/0) on port 27017, the MongoDB default. For security reasons, I want to restrict inbound traffic to requests from the S3 static website only; however, S3 apparently does not supply fixed addresses, which causes a problem.
My only thought was to open the port to all IP ranges for the S3 region I am in. This AWS doc explains how to find them, although they are subject to change without notice.
http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Would this be the way to proceed, or am I missing something obvious here? Another way to assign an IP to S3, perhaps?
S3 is a storage service, not a compute service, so it cannot make requests to your MongoDB. When S3 serves static webpages, your browser renders them, and when a user's page connects to your MongoDB, the request goes to MongoDB from the user's computer.
So MongoDB sees the request coming from the user's IP. Since you do not know where users are coming from (or their IP range), you have no choice but to listen to traffic from any IP.
I think it is not possible to allow only your S3-hosted site to access the DB inside EC2, since S3 does not give you an IP address.
So it's better to try an alternative solution: instead of accessing the DB directly, proxy through an HTTPS service inside your EC2 instance and restrict the inbound traffic on your MongoDB port.
S3 won't request your MongoDB server on the EC2 instance. From my understanding, your JS files in the browser request the MongoDB running on the EC2 instance. In that case, you have to add message headers in the MongoDB configuration to allow CORS.