I am attempting to set up MWAA in AWS, and the UI web server needs to be inside a private subnet. Based on the documentation, setting up access to the web server's VPC endpoint requires a VPN, bastion host, or load balancer, and I would ideally like to use a load balancer to grant users access.
I am able to see the VPC endpoint that was created, and it is associated with an IP address in each of the two subnets that were chosen during the initial environment setup.
Target groups were set up pointing to these IP addresses over HTTPS:443.
An internal ALB was created with the target groups above.
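For reference, this is roughly what that setup corresponds to in boto3 (the VPC ID and the two endpoint IPs are placeholders for the values shown on the VPC endpoint's network interfaces):

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# IP-type target group pointing at the two VPC endpoint ENI addresses
# (10.0.1.10 / 10.0.2.10 are placeholders for the real endpoint IPs).
tg = elbv2.create_target_group(
    Name="mwaa-webserver",
    Protocol="HTTPS",
    Port=443,
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
    TargetType="ip",
    HealthCheckProtocol="HTTPS",
)["TargetGroups"][0]

elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "10.0.1.10", "Port": 443},
             {"Id": "10.0.2.10", "Port": 443}],
)
```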
The UI is presented in the MWAA console as a pop-out link. When accessing that link I am sent to a page that says "This site can't be reached", and the URL has a syntax similar to
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I replace the beginning of the URL with the one below, I am able to get to the proper MWAA web page. There are some HTTPS certificate issues which I can figure out later, but this seems to be the proper landing page I need to reach.
https://<INTERNAL_ALB_A_RECORD>/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
If I access just the internal ALB A record in my browser
https://<INTERNAL_ALB_A_RECORD>
I get redirected to a login page for MWAA, click the login button, and then get redirected to the URL below, which shows the "This site can't be reached" page.
https://####-vpce.c71.us-east-1.airflow.amazonaws.com/aws_mwaa/aws-console-sso?login=true<MWAA_WEB_TOKEN>
I am not sure exactly where the issue is, but it seems that I am not being redirected to where I need to go.
Should I try an NLB pointing to the ALB as a target group? Additionally, I read that to access an internal ALB you need access to the VPC. What does this mean exactly?
I am still not sure of the root cause of the redirection not taking place.
Our networking does not need an ALB or NLB in this scenario, since we have a Direct Connect setup that allows us to resolve VPC endpoints when we are on our corporate VPN. I still end up on a page with the "This site can't be reached" error, and to get to the proper landing page I just have to hit Enter again in my browser's URL bar.
If anyone else comes across this, make sure the login token is appended to the end of your URL before hitting Enter again.
I have a case open with AWS on this so I will update if they are able to figure out the root cause.
Our current setup: the corporate network is connected to AWS via VPN, and a Route53 entry points to an ELB, which points to an ECS service (both inside a private VPC subnet).
=> When you request the URL (from inside the corporate network) you see the web application. ✅
Now, what we want is that when the ECS service is not running (maintenance, error, ...), the users are shown a maintenance page directly.
At the moment they will see the default AWS 503 error page. We want to provide a simple static HTML page with some maintenance information.
What we tried so far:
Using Route53 with failover to a CloudFront distribution serving an S3 bucket with the HTML
This does work, but:
Route53 does not fail over very quickly => until it switches to CloudFront, the users will still see the default AWS 503 page.
as this is a DNS failover and browsers (and proxies, local DNS caches, ...) cache resolved entries, the users will still see the default AWS 503 page even after Route53 has switched. Only once the new IP address is resolved (which may take minutes, or until a browser or OS restart) will the user see the maintenance page.
same as the two points before, but the other way around: when the service is back up, the users will see the maintenance page much longer than they should.
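For context, the failover setup corresponds to roughly this in boto3 (hosted zone ID, domain, health check ID, and the ELB/CloudFront names are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# Primary record aliases the internal ELB, secondary aliases the CloudFront
# distribution in front of the S3 maintenance page. All IDs and names below
# are placeholders except Z2FDTNDATAQYW2, CloudFront's fixed alias zone ID.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={"Changes": [
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com",
             "Type": "A",
             "SetIdentifier": "primary",
             "Failover": "PRIMARY",
             "HealthCheckId": "11111111-2222-3333-4444-555555555555",
             "AliasTarget": {
                 "HostedZoneId": "ZELBZONEID12345",  # the ELB's hosted zone ID
                 "DNSName": "internal-my-elb-123456.eu-central-1.elb.amazonaws.com",
                 "EvaluateTargetHealth": True,
             }}},
        {"Action": "UPSERT",
         "ResourceRecordSet": {
             "Name": "app.example.com",
             "Type": "A",
             "SetIdentifier": "secondary",
             "Failover": "SECONDARY",
             "AliasTarget": {
                 "HostedZoneId": "Z2FDTNDATAQYW2",
                 "DNSName": "d111111abcdef8.cloudfront.net",
                 "EvaluateTargetHealth": False,
             }}},
    ]},
)
```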
As this is not what we were looking for, we next tried:
Using CloudFront with two origins (our ELB and the failover S3 bucket) and a custom error page for 503.
This does not work, as CloudFront needs the origins to be publicly available and our ELB is in a private VPC subnet ❌
We could reconfigure our complete network environment to make it public and restrict access to CloudFront IPs. While this would probably work, we see the following drawbacks:
Security is decreased: someone else could set up a CloudFront distribution with our web application as the target and would have full access to it from outside our corporate network.
To overcome this security issue, we would have to verify a secret header (sent from CloudFront to the application), which means having security code inside our application (see the sketch after this list) => why should our application handle that security? What if that code has a bug?
Our current environment is already up and running. We would have to change a lot just for an error page, and end up with reduced security overall!
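To illustrate what that secure-header check would mean in practice, here is a hypothetical sketch (the header name, secret handling, and Flask framework are assumptions, not something we actually run):

```python
# Hypothetical sketch: reject any request that did not pass through CloudFront
# by checking a custom origin header. "X-Origin-Secret" is a made-up header name
# that CloudFront would be configured to add to origin requests.
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)
ORIGIN_SECRET = os.environ.get("CLOUDFRONT_ORIGIN_SECRET", "")

@app.before_request
def require_cloudfront_header():
    supplied = request.headers.get("X-Origin-Secret", "")
    if not hmac.compare_digest(supplied, ORIGIN_SECRET):
        abort(403)  # the request bypassed CloudFront
```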
Use a second ECS service (e.g. HAProxy, nginx, Apache, ...) with our application as its target and an errorfile for our maintenance page.
While this would work as expected, it also comes with some drawbacks:
The service is a single point of failure: when it is down, you cannot access the web application. To overcome this, you have to put it behind an ELB, run it in at least two AZs, and (optionally) make it horizontally scalable to handle larger request volumes.
The service costs money! Maybe you only need one small instance with little memory and CPU, but it (probably) has to scale together with your web application when you have a lot of requests.
It feels like we are back in the 2000s and not in a cloud environment.
So, long story short: Are there any other ways to implement a f*****g simple maintenance page while keeping our web application private and secure?
https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
I am following the above document to connect to my AWS Elasticsearch domain via Postman.
What I want to achieve: send a request and get the response.
I have put in everything related to authentication as well, but it still gives a timeout error.
The error is 'Could not get any response'.
My Postman settings related to SSL are also correct.
Sample URL:
https://vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com/elasticsearch_index/_search?source={"query":{"bool":{"should":[{"multi_match":{"query":"abc","fields":["name.suggestion"],"fuzziness":1}}]}},"size":10,"_source":["name"],"highlight":{"fields":{"name.suggestion":{}},"pre_tags":["\u003Cem\u003E"],"post_tags":["\u003C\/em\u003E"]}}&source_content_type=application/json
Since your ES domain is in the VPC, you can't access it from the internet. Security groups and "allowing the port" are unfortunately not enough.
The following is written in the docs:
If you try to access the endpoint in a web browser, however, you might find that the request times out. To perform even basic GET requests, your computer must be able to connect to the VPC. This connection often takes the form of a VPN, managed network, or proxy server.
Some options to consider are:
Set up a bastion host in a public subnet of the VPC and create an SSH tunnel from your local machine to the ES domain through the bastion host. This is the easiest ad-hoc proxy solution mentioned in the docs (see the sketch after this list).
Accessing the ES directly from the bastion host (e.g. over a remote desktop session on the bastion).
Setting up a proxy server to proxy requests from the internet to the ES domain.
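As a rough sketch of the tunnel option (the bastion address, key file, and index name are placeholders), from your local machine it could look like this:

```python
# Sketch only: forward a local port through the bastion host to the VPC-only
# Elasticsearch endpoint, then query it locally with requests.
import requests
from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel

ES_HOST = "vpc-abc-yqb7jfwa6tw6ebwzphynyfvaka.ap-southeast-1.es.amazonaws.com"

with SSHTunnelForwarder(
    ("bastion.example.com", 22),            # placeholder bastion address
    ssh_username="ec2-user",
    ssh_pkey="/path/to/bastion-key.pem",    # placeholder key file
    remote_bind_address=(ES_HOST, 443),
    local_bind_address=("127.0.0.1", 9200),
):
    # The certificate is issued for the ES hostname, not localhost,
    # so certificate verification is disabled for this ad-hoc test.
    response = requests.get(
        "https://127.0.0.1:9200/elasticsearch_index/_search",
        params={"q": "abc"},
        verify=False,
    )
    print(response.json())
```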
For creating and managing an ES domain you can refer to this documentation.
While creating the ES domain, in the Network configuration section you can choose either VPC access or public access. If you select public access, you can secure your domain with an access policy that only allows specific users or IP addresses to access the domain.
To learn more about access policies, you can refer to this SO answer.
So, if you create your ES domain outside the VPC with public access, you can easily send requests and get responses through Postman without adding any authorization.
The endpoint in the URL is the endpoint that was generated when you created your ES domain.
Typical requests against that endpoint include creating an index, adding data to the index, and a GET call to retrieve the mapping of the created index (see the example requests below).
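For example (a sketch with a made-up index name and document; the endpoint is the one generated for your domain):

```python
import requests

# Placeholder public endpoint generated when the ES domain was created.
ES = "https://search-mydomain-abc123.ap-southeast-1.es.amazonaws.com"

# Create an index
requests.put(f"{ES}/movies")

# Add a document to the index
requests.put(f"{ES}/movies/_doc/1", json={"title": "My Movie", "year": 2019})

# Get the mapping of the created index
print(requests.get(f"{ES}/movies/_mapping").json())
```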
Now you can check from your AWS console that this index has been created in the ES domain.
I'm using AWS S3 for front-end web hosting and AWS EC2 for back-end hosting.
The EC2 instance is behind an ELB and has scheduled maintenance, and I want to display a maintenance page while the EC2 instance is under maintenance.
The way I set it up is to have index.html "touch" some files on EC2; if the server is unavailable, it returns an HTTP 503 error. There is a 503.html in S3 and I want to display it when the 503 error happens.
I've tried creating a new CloudFront error page and creating S3 redirection rules, but none of them are working. What is the best way to set up the maintenance page?
I've been searching for a quick way to do this. We need to return a 503 to the world during a DB upgrade, but whitelist a few developer IPs so they can test it before opening back up to the public.
Found a one-stop solution:
Go to Load Balancers in EC2 and select the load balancer you would like to target. Below, you should see Listeners. Click on a listener and edit the rules. Create rules like this:
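In boto3 terms, the rules look roughly like this (the listener/target group ARNs and the whitelisted IPs are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."      # placeholder
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-app/..."   # placeholder

# Rule 1 (top of the list): the whitelisted developer IPs are forwarded to the app.
elbv2.create_rule(
    ListenerArn=LISTENER_ARN,
    Priority=1,
    Conditions=[{
        "Field": "source-ip",
        "SourceIpConfig": {"Values": ["203.0.113.10/32", "203.0.113.11/32"]},  # placeholder IPs
    }],
    Actions=[{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}],
)

# Default rule (always last): everyone else gets the maintenance page with a 503.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{
        "Type": "fixed-response",
        "FixedResponseConfig": {
            "StatusCode": "503",
            "ContentType": "text/html",
            "MessageBody": "<html><body><h1>Down for maintenance</h1></body></html>",
        },
    }],
)
```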
Now everyone gets a pretty maintenance page returned with a 503 status code, and only the two IP addresses in the first rule are able to browse to the site. Order is important: the two IP exceptions are at the top, then evaluation goes down the list. The last item (the default rule) is always there.
https://aws.amazon.com/about-aws/whats-new/2018/07/elastic-load-balancing-announces-support-for-redirects-and-fixed-responses-for-application-load-balancer/
Listener Rules for Your Application Load Balancer:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/listener-update-rules.html
Note: there is a maximum of 1024 characters for the fixed response message body.
It sounds like you are looking for the origin failover functionality (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/high_availability_origin_failover.html), which lets you fail over to a second origin if requests to the first origin start failing. Alternatively, you could configure Route53 health checks and do the failover at the DNS level: https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/dns-failover.html
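For the origin failover option, the relevant fragment of the CloudFront distribution config looks roughly like this (origin IDs are placeholders):

```python
# Fragment of a CloudFront distribution config (sketch, placeholder origin IDs):
# requests go to "primary-elb" first and fail over to "maintenance-s3" when the
# primary returns one of the listed status codes.
origin_groups = {
    "Quantity": 1,
    "Items": [{
        "Id": "app-with-failover",
        "FailoverCriteria": {
            "StatusCodes": {"Quantity": 4, "Items": [500, 502, 503, 504]},
        },
        "Members": {
            "Quantity": 2,
            "Items": [{"OriginId": "primary-elb"}, {"OriginId": "maintenance-s3"}],
        },
    }],
}
```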
I am trying to set listener rules on an ALB. I want to add Google OAuth support to one of my servers.
Here are the Google endpoints I am using
I see the Google auth page alright, but on the callback URL I'm seeing a 500 Internal Server Error. I've also set the callback URL. I'm at a loss as to what's wrong here. Any help is most appreciated!
After authentication, I'm not redirecting to my application; instead I've set the ALB to show a simple text-based response.
I struggled with the same problem for hours, and in the end it turned out that the user info endpoint was wrong. I was using the same one as you, but it should be https://openidconnect.googleapis.com/v1/userinfo.
I haven't found any Google documentation saying what the value should be, but I found this excellent blog post that contains a working example: https://cloudonaut.io/how-to-secure-your-devops-tools-with-alb-authentication/ (the first example uses Cognito, but the second uses OIDC and Google directly).
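For reference, a sketch of what the authenticate rule ends up looking like with the Google endpoints (the listener ARN, client ID, and client secret are placeholders); note the corrected user info endpoint:

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",  # placeholder
    Priority=1,
    Conditions=[{"Field": "path-pattern", "Values": ["/*"]}],
    Actions=[
        {
            "Type": "authenticate-oidc",
            "Order": 1,
            "AuthenticateOidcConfig": {
                "Issuer": "https://accounts.google.com",
                "AuthorizationEndpoint": "https://accounts.google.com/o/oauth2/v2/auth",
                "TokenEndpoint": "https://oauth2.googleapis.com/token",
                "UserInfoEndpoint": "https://openidconnect.googleapis.com/v1/userinfo",
                "ClientId": "<GOOGLE_CLIENT_ID>",         # placeholder
                "ClientSecret": "<GOOGLE_CLIENT_SECRET>",  # placeholder
                "Scope": "openid email",
            },
        },
        # Simple text response after authentication, as described in the question.
        {
            "Type": "fixed-response",
            "Order": 2,
            "FixedResponseConfig": {
                "StatusCode": "200",
                "ContentType": "text/plain",
                "MessageBody": "Authenticated",
            },
        },
    ],
)
```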
From the AWS documentation:
HTTP 500: Internal Server Error
Possible causes:
You configured an AWS WAF web access control list (web ACL) and there was an error executing the web ACL rules.
You configured a listener rule to authenticate users, but one of the following is true:
The load balancer is unable to communicate with the IdP token endpoint or the IdP user info endpoint. Verify that the security groups for your load balancer and the network ACLs for your VPC allow outbound access to these endpoints. Verify that your VPC has internet access. If you have an internal-facing load balancer, use a NAT gateway to enable internet access.
The size of the claims returned by the IdP exceeded the maximum size supported by the load balancer.
A client submitted an HTTP/1.0 request without a host header, and the load balancer was unable to generate a redirect URL.
A client submitted a request without an HTTP protocol, and the load balancer was unable to generate a redirect URL.
The requested scope doesn't return an ID token.
I've just deployed my IdentityServer3 solution to an AWS instance. I had IdentityServer3 configured with RequireSSL. The problem is that it's relatively common to have SSL running up to the AWS load balancer and then run non-SSL between the LB and IIS. So I set RequireSSL to false and tried again. Now the problem seems to be that when IdentityServer tries to redirect to the sign-in page, it does so without HTTPS. When this redirect exits and re-enters through the LB, it fails because traffic coming into the LB must be HTTPS. So it seems that IdentityServer constructs the redirect to the sign-in page based on the RequireSSL flag. Is there any way around this? I'm working with my ops guy to just run SSL all the way to IIS, but I'm getting some pushback (of course).
As the docs state here:
https://identityserver.github.io/Documentation/docsv2/advanced/deployment.html
If you want to terminate SSL on the load balancer, there are two relevant settings on the options:
RequireSsl
Set this to false to allow non-SSL connections between the load balancer and IdentityServer.
PublicOrigin
Since your internal farm nodes have different names than the public reachable address, IdentityServer can’t use it for link generation. Set this property to the public name.