HTTPS on Fargate's public IP - is it possible?

I run a service on Fargate and my main objective is to keep the cost as low as possible. Minor downtime is not an issue, which is helpful with the current approach. I have one instance of the task running on Fargate (with the Spot capacity provider). I have my domain in Route 53 and I'm using a Lambda function to update the A record of www when a new container starts. Everything seems to be working fine. I need to enable HTTPS though, and I'm stuck on this one - I don't know if it's possible. I created a (free) certificate with AWS Certificate Manager, but I don't know how to make the service listen on port 443 (which is allowed in the security group). Using a Load Balancer is not an option, as it would automatically increase the cost by ~$15/month.
Is this possible? Maybe I just need to modify the container (which uses Apache)?
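For readers wiring up the same thing, here is a minimal sketch of the kind of Lambda described above, assuming an EventBridge rule on ECS task state-change events; the hosted zone ID and record name are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

def handler(event, context):
    """Triggered by an EventBridge rule on ECS Task State Change (RUNNING)."""
    # The Fargate task's ENI ID is in the event's attachment details.
    details = event["detail"]["attachments"][0]["details"]
    eni_id = next(d["value"] for d in details if d["name"] == "networkInterfaceId")

    # Look up the public IP associated with that ENI.
    eni = ec2.describe_network_interfaces(NetworkInterfaceIds=[eni_id])
    public_ip = eni["NetworkInterfaces"][0]["Association"]["PublicIp"]

    # Upsert the A record so www follows the new task.
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789EXAMPLE",  # placeholder hosted zone ID
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",  # placeholder record name
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_ip}],
            },
        }]},
    )
```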

It's possible, but you will need to look into something like Let's Encrypt for an SSL certificate you can use directly inside the Fargate task. ACM certificates cannot be exported, so they cannot be used for that purpose.

Configure your webserver inside the container with the cert and private key as normal to listen on 443 (see the sketch below). A container hosted on Fargate with a public IP is not much different from an EC2 instance with a public IP. You are already taking care of updating the A record when it changes.
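As a minimal illustration, here is a Python stand-in for whatever webserver runs in the container; the certificate paths assume a Let's Encrypt layout and are placeholders:

```python
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Load the cert and private key baked into (or mounted in) the container.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(
    certfile="/etc/letsencrypt/live/www.example.com/fullchain.pem",
    keyfile="/etc/letsencrypt/live/www.example.com/privkey.pem",
)

# Listen on 443 inside the container; the port must also be exposed in the
# task definition and allowed in the security group.
server = HTTPServer(("0.0.0.0", 443), SimpleHTTPRequestHandler)
server.socket = ctx.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```

With Apache, the equivalent is an SSL virtual host whose SSLCertificateFile and SSLCertificateKeyFile directives point at the same files.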

Related

Connect to AWS EC2 Instance over HTTPS

I have a MERN application with the frontend hosted on Netlify. I currently have the backend hosted at onrender.com. However, this is quite slow, so I was looking for something with faster load times.
I have set up an EC2 instance on AWS and it is much faster, but I am struggling to enable HTTPS traffic.
The current setup:
EC2 instance set up and backend running. (I have run it locally over HTTP and it works fine.)
AWS: security groups allow HTTPS.
The issue is that when I try to connect over https, it does not work.
I have tried various things, including ACM certificates (I have a certificate for my domain) and creating load balancers that would direct to my instance, but I don't seem to be succeeding. Admittedly, I don't fully understand what exactly I need to do here.
The outcome I want is to simply interact with the backend, which is on an AWS ubuntu instance, from my frontend over https.
Any help would be greatly appreciated.
If you are going the Load Balancer way, it should be fairly simple.
Yes, it is a good idea to use ACM to provision the certificate for you.
Make sure that the security groups are well configured:
In your case, the Load Balancer should accept traffic on ports 80 and 443.
The instance security group should be open on whatever port you have configured the instance to listen on; it depends on your implementation.
In the target group, make sure that you have configured the target port correctly (that is the open EC2 port where the instance receives traffic), and also make sure that the health check is configured correctly.
I've attached a quick summary of how this little architecture should look.
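If you script it, the HTTPS side of that setup comes down to two listeners. A boto3 sketch, where all ARNs are placeholders for your own load balancer, ACM certificate, and target group:

```python
import boto3

elbv2 = boto3.client("elbv2")

ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/abc123"
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/my-tg/def456"
CERT_ARN = "arn:aws:acm:us-east-1:111122223333:certificate/placeholder"

# HTTPS listener on 443: terminates TLS with the ACM cert and forwards
# to the target group containing the EC2 instance.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": CERT_ARN}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)

# HTTP listener on 80 that just redirects everything to HTTPS.
elbv2.create_listener(
    LoadBalancerArn=ALB_ARN,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{
        "Type": "redirect",
        "RedirectConfig": {"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"},
    }],
)
```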

SSH beanstalk from terminal using DNS

I am running an app in AWS Elastic Beanstalk. I use Jenkins to do automatic deploys, manage crons, etc.; Jenkins connects to the EC2 instance behind Beanstalk using the public IP.
The problem arises when the instance scales: since the IP of the EC2 instance will be different, I have to manually update Jenkins every time.
One of the simplest options would be to open port 22 on the load balancer, but since I am using the recommended Application Load Balancer, it only allows me to open ports 80/443. I was wondering: is there a way to create a DNS record in Route 53 that will automatically point to the right IP every time it scales?
I would like to avoid changing the load balancer, because there are at least 20 environments that would need to be reconfigured.
I tried to look this up, but no one seems to have this issue, so either I have the wrong architecture, or it is too easy to fix.

Certbot certificate rate limit hit during automation

I have purchased some Elastic IPs from AWS which are mapped to some subdomains,
e.g., an Elastic IP mapped to xyz.domain.com.
I have an algorithm which creates EC2 instances according to the load on our website.
After a successful start of an instance, I associate one of those Elastic IPs with the new instance using the API.
This triggers my service to generate a certificate using certbot, which completes the new instance's setup, and now I can use it in my existing architecture.
When the load goes back to normal, I remove those new instances.
My problem is that when the load is fluctuating, I sometimes hit certbot (Let's Encrypt) rate limits and am unable to function properly, because without an SSL certificate my whole system seems to collapse.
So what can I do to solve this problem?
Fixed parameters: 10 Elastic IPs; all the domains are subdomains of a main domain and are already mapped to the Elastic IPs.
If you really want to use certbot, then you need to store these certificates and reuse them when you start a new instance. You can use a Parameter Store SecureString, for example, for each Elastic IP, and when you spin up the instance it checks this parameter first. If there is no certificate, or it expires soon, then get a new cert and overwrite the stored one. With this solution, a new instance does not mean a new certificate.
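A rough sketch of that check-then-issue flow; the parameter naming and contact email are illustrative, and a real version would store the private key the same way and also check the stored cert's expiry:

```python
import subprocess
import boto3

ssm = boto3.client("ssm")

def get_or_issue_cert(domain: str) -> str:
    param = f"/certs/{domain}"  # one SecureString per subdomain / Elastic IP
    try:
        # Reuse the stored certificate if one exists.
        return ssm.get_parameter(Name=param, WithDecryption=True)["Parameter"]["Value"]
    except ssm.exceptions.ParameterNotFound:
        pass

    # No stored cert: issue one with certbot, then cache it for the next instance.
    subprocess.run(
        ["certbot", "certonly", "--standalone", "-n", "--agree-tos",
         "-m", "admin@example.com", "-d", domain],
        check=True,
    )
    with open(f"/etc/letsencrypt/live/{domain}/fullchain.pem") as f:
        cert = f.read()
    ssm.put_parameter(Name=param, Value=cert, Type="SecureString", Overwrite=True)
    return cert
```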
But this setup feels wrong. You can use an Application Load Balancer, which integrates with ACM and Route 53, so you can move the HTTPS termination to a single service and not have to care about how instances are starting and stopping in the background.

Self-hosted VPN with PiHole on AWS

I'm trying to create a setup where all of my (mobile and home) traffic is encrypted and ad-blocked. The idea is this setup:
all of my traffic, when using the VPN client on my phone or PC, is routed through a custom OpenVPN setup running on an AWS EC2 instance. On its way out of the EC2 instance towards the public internet, I want a PiHole or equivalent DNS sinkhole filtering requests for blacklisted sites.
It's important that this is configured in such a way that I'm not running a public/open DNS resolver - only traffic coming in through the OpenVPN tunnel (and therefore coming from an OpenVPN client that is using one of my keys) should be allowed.
Is this possible? Am I correctly understanding the functionality of all the parts?
How do I set this up? What concepts do I need to understand to make this work?
This tutorial seems like a good place to start. It uses Lightsail, not EC2, but if you aren't planning to scale this up much, that might be simpler and cheaper.

AWS - Can I launch nodes under a DNS domain (Auto Scale Group)?

Use Case
I'm working on an application that uses Presto, and for Presto, I have to set up HTTPS traffic internally (for security compliance reasons).
For this, I preferably need the nodes' FQDNs to be in the same domain, e.g. myhost1.mydomain.com, myhost2.mydomain.com.
My Question
AWS automatically gives an FQDN like ip-10-20-30-40.ec2.internal. So, my question is:
Is there a way I can have a new node automatically be created with a FQDN like myhost1.mydomain.com? I know I can create internal "hosted zones" and DNS records for my hosts pretty easily, but I can't figure out how to make that the default domain for a new host.
Also, just FYI, I'm doing this for an auto-scale group; but I suspect that's irrelevant.
When an Amazon EC2 instance starts, it can run a script passed in via User Data.
You could code this script to create a DNS record in Amazon Route 53 that points to the instance: an A record for its IP address (a CNAME can only point to another hostname, not an IP).
I'm not sure how you'd determine the number within the name, so you could just create a random name. Also, it might be tricky to remove the DNS entry when the instance is terminated. One way to both assign and remove the record would be to use Amazon EC2 Auto Scaling lifecycle hooks, which allow code to be triggered outside of the instance itself. It's more complex but would be fully effective.
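A sketch of what such a User Data (or lifecycle hook) script might do; the zone ID and record name are placeholders, and the instance needs an IAM role allowed to change the hosted zone:

```python
import urllib.request
import boto3

# Ask the instance metadata service (IMDSv2) for this instance's private IP.
token_req = urllib.request.Request(
    "http://169.254.169.254/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req).read().decode()
ip_req = urllib.request.Request(
    "http://169.254.169.254/latest/meta-data/local-ipv4",
    headers={"X-aws-ec2-metadata-token": token},
)
private_ip = urllib.request.urlopen(ip_req).read().decode()

# Upsert an A record for this node in the internal hosted zone.
boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # placeholder private hosted zone ID
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "myhost1.mydomain.com",  # could be a random name per node
            "Type": "A",
            "TTL": 60,
            "ResourceRecords": [{"Value": private_ip}],
        },
    }]},
)
```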
I'm not familiar with Presto, but here are a couple of ideas.
First, if you are using an AWS-managed load balancer, you can enable HTTPS between it and the instance using a self-signed cert: the load balancer will NOT validate the cert, so a self-signed one works, and the connection is still encrypted.
If that's not what you need, take a look at DHCP option sets for your VPC - I believe you can set your own domain name, rather than use the default ec2.internal.
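If you go that route, the calls look something like this; a sketch where the VPC ID is a placeholder, and note that the option set only changes the domain suffix instances receive, so you still need DNS records for the names to resolve:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set whose domain name replaces the default ec2.internal.
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name", "Values": ["mydomain.com"]},
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)

# Point the VPC at the new options set.
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)
```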