We have set up a common ALB as a single point of entry to all systems, and we want to drop traffic at the ALB when an SSL cert has not been renewed (i.e., has expired) by the individual system team.
We use a 3rd-party DDoS service with automatic cert updates, so it shows its own cert as valid, but the ALB cert is not valid because it was never updated there.
Internet -> DDOS services -> ALB -> Systems
I wonder if there's a way to deny traffic if the SSL cert is expired?
You can use an AWS Config rule (the managed acm-certificate-expiration-check rule) that notifies an SNS topic when the cert has expired (0 days to expiration).
The SNS topic can trigger a Lambda function that removes or alters the security group rule allowing traffic from the security group associated with the ALB.
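A minimal sketch of such a Lambda handler, assuming boto3 and that the targets' security group ID and the ALB's security group ID arrive via environment variables (both variable names here are made up):

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Hypothetical environment variables: the targets' SG and the ALB's SG.
TARGET_SG_ID = os.environ["TARGET_SECURITY_GROUP_ID"]
ALB_SG_ID = os.environ["ALB_SECURITY_GROUP_ID"]


def handler(event, context):
    # Revoke the ingress rule on the targets' security group that allows
    # HTTPS traffic from the ALB's security group, effectively dropping
    # traffic once the certificate has expired.
    ec2.revoke_security_group_ingress(
        GroupId=TARGET_SG_ID,
        IpPermissions=[
            {
                "IpProtocol": "tcp",
                "FromPort": 443,
                "ToPort": 443,
                "UserIdGroupPairs": [{"GroupId": ALB_SG_ID}],
            }
        ],
    )
```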
Related
I would like to redirect users to a custom maintenance page at a different domain. My setup includes an AWS load balancer and EC2 instances. If the EC2 instances behind the LB are not reachable, what rule do I need to add at the LB to check the status code and redirect to a maintenance page at a different domain?
Route53 Failover with S3 is an option
I suggest using Route53 with a failover routing approach, perhaps with a static website hosted on Amazon S3 for cost optimization.
Here are the main ideas:
Create a Route 53 health check for your main site. When the check fails, Route 53 directs traffic to the failover endpoint.
Create a record set for your primary endpoint that points to your main site (your ALB DNS name) with the Failover routing policy.
Create the failover endpoint, which can be a static site (S3) or your maintenance page domain.
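A rough sketch of the primary/secondary record sets with boto3; the hosted zone ID, health check ID, ALB DNS name, and maintenance domain below are all placeholders:

```python
import boto3

route53 = boto3.client("route53")

# Placeholder identifiers for illustration only.
HOSTED_ZONE_ID = "Z123EXAMPLE"
HEALTH_CHECK_ID = "abcd1234-example"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [
            {   # PRIMARY: the ALB, used while the health check passes.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": HEALTH_CHECK_ID,
                    "ResourceRecords": [
                        {"Value": "my-alb-123.us-east-1.elb.amazonaws.com"}
                    ],
                },
            },
            {   # SECONDARY: the maintenance page (e.g. an S3 static site).
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [
                        {"Value": "maintenance.example.com"}
                    ],
                },
            },
        ]
    },
)
```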
References:
https://aws.amazon.com/premiumsupport/knowledge-center/fail-over-s3-r53/
Route53 Health-check with SNS & Lambda
You can use this feature as a standalone health check without affecting your domain setup as above. It notifies an SNS topic of any status changes, and you can subscribe a Lambda function that updates your Load Balancer listener to redirect your traffic to another site.
Once set up properly, it creates an alarm for you to monitor your main site.
In the Lambda function, you can use Boto3 (Python 3) to update your Load Balancer based on two kinds of events:
Unhealthy: route traffic to another domain
Healthy: route traffic to your target group
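A minimal sketch of that Lambda using modify_listener; the listener ARN, target group ARN, maintenance host, and the way the incoming event is inspected are all assumptions:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs and domain for illustration only.
LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/my-alb/..."
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/my-tg/..."
MAINTENANCE_HOST = "maintenance.example.com"


def handler(event, context):
    # Simplistic check of the SNS/health-check notification, for illustration.
    healthy = "Healthy" in str(event)

    if healthy:
        # Healthy: forward traffic to the normal target group.
        actions = [{"Type": "forward", "TargetGroupArn": TARGET_GROUP_ARN}]
    else:
        # Unhealthy: redirect traffic to the maintenance domain.
        actions = [{
            "Type": "redirect",
            "RedirectConfig": {
                "Protocol": "HTTPS",
                "Host": MAINTENANCE_HOST,
                "Port": "443",
                "Path": "/#{path}",
                "Query": "#{query}",
                "StatusCode": "HTTP_302",
            },
        }]

    elbv2.modify_listener(ListenerArn=LISTENER_ARN, DefaultActions=actions)
```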
References:
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-subscribe-sns-topic-same-account/
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elbv2.html#ElasticLoadBalancingv2.Client.modify_listener
I have an ALB and an internet gateway for Lambda.
For Lambda, HTTPS access works automatically.
For the ALB, I can't access it over HTTPS.
I am still not sure, though:
Do I need to use ACM for the ALB, and do I need to do nothing for the Lambda gateway?
Am I correct? And why does this difference happen?
For Lambda HTTPS access, your client's HTTPS connection is validated against the Amazon domain, and thus uses Amazon's certificates.
For ALB HTTPS access, your client's HTTPS connection is validated against the domain you specify, so you must provide a certificate for that domain; ACM is used to store your certificate.
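One quick way to see the difference, sketched with Python's standard ssl module (both hostnames below are placeholders), is to look at which certificate each endpoint actually presents:

```python
import socket
import ssl


def print_cert_subject(hostname, port=443):
    """Connect over TLS and print who the presented certificate belongs to."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print(hostname, "->", cert.get("subject"), cert.get("issuer"))


# Placeholder endpoints: whatever amazonaws.com-style endpoint fronts your
# Lambda presents an Amazon-managed certificate, while your own domain on the
# ALB presents whichever certificate you attached to the HTTPS listener.
print_cert_subject("your-lambda-endpoint.amazonaws.com")
print_cert_subject("api.example.com")
```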
We are planning to create a VPC endpoint in us-west-2 for an NLB in us-east-1. For that, we currently created an NLB in us-west-2, created target groups pointing to the us-east-1 NLB interface IPs, and created a VPC endpoint to the NLB in us-west-2. This setup works fine.
However, we are looking for better alternatives.
The NLB in us-east-1 targets an ALB in the same region.
Using Route 53 resolvers to route cross-region traffic may be a better, simpler alternative to using ELBs.
From AWS guide:
The challenge some customers have faced is that VPC endpoints can only be used to access resources in the same Region as the endpoint
One of the ways we can solve this problem is with Amazon Route 53 Resolver. Route 53 Resolver provides inbound and outbound DNS services in a VPC. It allows you to resolve domain names for AWS resources in the Region where the resolver endpoint is deployed. It also allows you to forward DNS requests to other DNS servers based on rules you define. To consistently apply VPC endpoint policies to all traffic, we use Route 53 Resolver to steer traffic to VPC endpoints in each Region.
Natively, AWS PrivateLink still does not support cross-region requests (the S3 and DynamoDB endpoint documentation both note this limitation), so there's no configuration possible on that front.
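As a rough sketch of the Resolver piece described above (the endpoint ID, target IPs, and domain name are placeholders), a forwarding rule created with boto3 might look like this:

```python
import uuid
import boto3

resolver = boto3.client("route53resolver", region_name="us-west-2")

# Placeholder values for illustration only.
OUTBOUND_ENDPOINT_ID = "rslvr-out-EXAMPLE"
# IPs of the inbound resolver endpoint in us-east-1.
TARGET_IPS = [{"Ip": "10.0.1.10", "Port": 53}, {"Ip": "10.0.2.10", "Port": 53}]

# Forward DNS queries for the service domain to the resolvers in us-east-1,
# so lookups resolve to the VPC endpoints in that Region.
resolver.create_resolver_rule(
    CreatorRequestId=str(uuid.uuid4()),  # idempotency token
    Name="forward-service-domain-to-us-east-1",
    RuleType="FORWARD",
    DomainName="service.internal.example.com",
    TargetIps=TARGET_IPS,
    ResolverEndpointId=OUTBOUND_ENDPOINT_ID,
)
```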
So I have an EC2 instance that has a web server. In the security groups I allowed incoming traffic on 80 and 443 but removed all the outgoing traffic for security reasons.
My application uses AWS SNS and SMTP, and of course whenever it tries to connect to these services it fails since the outbound traffic is blocked. How can I restrict the outbound traffic to just these services without using a proxy? I tried to check VPC endpoints but didn't find SNS and SMTP in the list.
You will need to enable the ports that these services need to receive your requests. Most AWS services use a REST interface which requires HTTPS (443).
For SNS you will need to enable port 443 outbound.
For SMTP you will need to look up the ports that you configured. For SES this is usually ports 465 or 587.
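A minimal sketch of opening just those outbound ports with boto3; the security group ID is a placeholder, and which SMTP port you actually need depends on your configuration:

```python
import boto3

ec2 = boto3.client("ec2")

SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder

# Allow outbound HTTPS (SNS and most AWS REST APIs) and SMTP over TLS (SES).
ec2.authorize_security_group_egress(
    GroupId=SECURITY_GROUP_ID,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS to AWS APIs"}]},
        {"IpProtocol": "tcp", "FromPort": 587, "ToPort": 587,
         "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "SES SMTP (STARTTLS)"}]},
    ],
)
```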
Amazon publishes ip-ranges.json which contains a list of IP addresses for AWS. You can create a Lambda function to automatically update your security groups with these addresses.
I would not block all outbound ports. Instead I would control where the instance can connect to using security groups and ip-ranges.json. Then I would test that you can still install updates, etc. If your instance is Windows based, then you have another can of worms adding the Microsoft sites.
IMHO: unless you really need this level of control and security and are prepared to spend a lot of time managing everything, I would not bother.
AWS IP Address Ranges
Example project:
How to Automatically Update Your Security Groups
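As a rough sketch of that approach (the region and service filters here are assumptions, since SNS does not appear as its own service in ip-ranges.json), pulling and filtering the published ranges might look like this:

```python
import json
import urllib.request

IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"


def aws_cidrs(region, service):
    """Return the IPv4 CIDR blocks published for a given region and service."""
    with urllib.request.urlopen(IP_RANGES_URL) as resp:
        data = json.load(resp)
    return [
        p["ip_prefix"]
        for p in data["prefixes"]
        if p["region"] == region and p["service"] == service
    ]


# Example: the broad AMAZON ranges for us-east-1 are used here as an assumption;
# these would then be fed into authorize_security_group_egress calls.
cidrs = aws_cidrs("us-east-1", "AMAZON")
print(len(cidrs), "CIDR blocks found")
```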
To add to John's answer,
last month AWS released a product called "AWS PrivateLink" which enables people to advertise services within a VPC much like S3 endpoints do today. AWS will be publishing AWS services the same way in the coming months, so this may only be a short-term problem for you.
More information can be found at https://aws.amazon.com/about-aws/whats-new/2017/11/introducing-aws-privatelink-for-aws-services/
I'm managing a domain at AWS Route 53, and I have a service exposed as an API on 3 servers spread across 3 main regions: US, Asia, EU.
I created a traffic policy to route clients, based on latency, to the appropriate region(s).
So a client comes in via api.example.com, enters this latency-based policy, and exits at the closest server. That works... with one problem, though: I don't know how to enable HTTPS so I can have my clients use https://api.example.com.
Any ideas?
SSL (HTTPS) is completely unrelated to all the Route53 stuff you talked about in your question. You need to install an SSL certificate on the server, or on the load balancer if you are using a load balancer. You can also install the SSL certificate at your CDN, if you are using one.
Route53 is a DNS service. Route53 does not manage the protocol of a service, and it does not manage encryption. Route53 (DNS) just allows a client to look up an IP based on a hostname.
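If the API servers sit behind a load balancer such as an ALB, one common route, sketched here with boto3 and placeholder ARNs and domain, is to request a certificate in ACM for the API hostname and attach it to the HTTPS listener:

```python
import boto3

acm = boto3.client("acm")
elbv2 = boto3.client("elbv2")

# Request a DNS-validated certificate for the API hostname (placeholder domain).
cert = acm.request_certificate(
    DomainName="api.example.com",
    ValidationMethod="DNS",
)
certificate_arn = cert["CertificateArn"]

# After the certificate is validated, create an HTTPS listener that uses it.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",  # placeholder
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": certificate_arn}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/my-tg/...",  # placeholder
    }],
)
```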