AWS ECS Containers and External DNS

We have AWS ECS instances.
We're using an external service (Twilio) that needs to reach a specific container:port.
And it's SSL, so it has to be a DNS name.
Currently, our upgrade script assigns each container an entry in Route 53, and on bootup I can use a combination of nslookup and my external IP address to discover my name (and then set an env var).
But if a container crashes and is replaced, my upgrade script won't have run, so the Route 53 entry won't have been updated.
Is this problem already solved in some way? At this point, I'm looking at 2 or 3 days to implement a solution.
I don't believe I can use Service Discovery, as SD uses the internal IP address and would be in foo.local, which isn't externally accessible.
At this point, I think I have to write a program that determines what my DNS name needs to be and updates Route 53. That seems simple, but I'd also have to grant the IAM role inside the container permission to update Route 53, and that sounds like a security problem. I'd write a separate program to expire dead names.
Is there a better way? This doesn't seem like that unique a problem.

Isn't this the problem that ECS services and their integration with AWS load balancers solve? If you have an ECS task that needs to run for a long time and be accessible at a public address, it should run in an ECS service configured to use a public load balancer.
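If a load balancer is ruled out, the self-registration program the asker describes can be quite small. Below is a minimal boto3 sketch, not a definitive implementation; the hosted zone ID, record name, and the checkip endpoint are assumptions, and the task role would need route53:ChangeResourceRecordSets scoped to just that zone (which narrows the security concern raised above):

```python
# register_dns.py - sketch of a container self-registering in Route 53 at startup.
# HOSTED_ZONE_ID and RECORD_NAME are placeholders supplied via the environment.
import os
import urllib.request

import boto3

HOSTED_ZONE_ID = os.environ["HOSTED_ZONE_ID"]  # e.g. Z1234567890ABC (placeholder)
RECORD_NAME = os.environ["RECORD_NAME"]        # e.g. task-1.example.com (placeholder)


def public_ip() -> str:
    # Ask an external service for the address we are reachable on.
    return urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()


def register(ip: str) -> None:
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "container self-registration at startup",
            "Changes": [{
                "Action": "UPSERT",  # create or overwrite, so restarts are idempotent
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,  # short TTL so replacements propagate quickly
                    "ResourceRecords": [{"Value": ip}],
                },
            }],
        },
    )


if __name__ == "__main__":
    register(public_ip())
```

Because the change is an UPSERT with a short TTL, a replacement container re-running this at startup overwrites the stale record, which covers the crash case in the question.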

Related

SSH beanstalk from terminal using DNS

I am running an app in AWS Beanstalk and use Jenkins for automatic deploys, cron management, etc. Jenkins connects to the EC2 instance behind Beanstalk using its public IP.
The problem arises when the environment scales: since the EC2 instance's IP will be different, I have to manually update Jenkins every time.
One of the simplest options would be to open port 22 on the load balancer, but since I am using the recommended Application Load Balancer, it only lets me open ports 80/443. I was wondering if there is a way to create a DNS record in Route 53 that will automatically point to the right IP every time it scales?
I would like to avoid changing the load balancer, because there are at least 20 environments that would need to be reconfigured.
I've searched, but no one seems to have this issue, so either I have the wrong architecture or it's too easy to fix.
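One low-churn workaround is to have Jenkins look the instance up at connect time instead of pinning an IP. A sketch with boto3, assuming the elasticbeanstalk:environment-name tag that Beanstalk applies to its instances:

```python
# find_eb_instance.py - look up the current public IP(s) of a Beanstalk
# environment's EC2 instances instead of hard-coding an IP in Jenkins.
import boto3


def environment_ips(env_name: str) -> list[str]:
    ec2 = boto3.client("ec2")
    resp = ec2.describe_instances(
        Filters=[
            # Beanstalk tags its instances with the environment name.
            {"Name": "tag:elasticbeanstalk:environment-name", "Values": [env_name]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    return [
        inst["PublicIpAddress"]
        for reservation in resp["Reservations"]
        for inst in reservation["Instances"]
        if "PublicIpAddress" in inst
    ]


if __name__ == "__main__":
    print(environment_ips("my-env"))  # "my-env" is a placeholder environment name
```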

Amazon Internal DNS Not Working Inside ECS Docker Containers

I have containers running via a service in ECS that start up every day. Today, they can't access resources because DNS is failing to resolve names (specifically, an AWS internal DNS entry).
The Docker host can resolve the name without issue. The DNS settings in /etc/resolv.conf are the same on the host and in the container itself. I've tried running the container in both bridged and host network mode and neither worked (which is especially weird for host mode, since the container is supposed to share the host's network stack, which I'd expect to include DNS).
Normally, I would suspect the DNS server configuration or the DNS entry configuration, but I don't have control over either of those things in this case (since the entry in question belongs to AWS).
Any ideas on how to fix this?
Please see:
My docker container has no internet
Hard-coding a DNS entry in the Docker daemon.json worked for me. Not ideal, but it got me going.
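For reference, the daemon.json change being described is typically the "dns" key; the addresses below are examples (in a VPC, the Amazon resolver sits at the VPC base address plus two, e.g. 10.0.0.2 for a 10.0.0.0/16 VPC), and the Docker daemon must be restarted afterwards:

```json
{
  "dns": ["10.0.0.2", "8.8.8.8"]
}
```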

HTTPS on Fargate's public IP - is it possible?

I run a service on Fargate and my main objective is to keep the cost as low as possible. Minor downtime is not an issue, which is helpful with the current approach. I have one instance of the task running on Fargate (with the spot provider). My domain is in Route 53 and I'm using a Lambda function to update the A record for www when a new container starts. Everything seems to be working fine. I need to enable HTTPS, though, and I'm stuck with this one; I don't know if it's possible. I created a (free) certificate with AWS but I don't know how to make the service listen on port 443 (allowed in the SG). Using a load balancer is not an option, as it would automatically increase the cost by ~$15.
Is this possible? Maybe I just need to modify the container (using Apache)?
It's possible, but you will need to look into something like Let's Encrypt for an SSL certificate you can use directly inside the Fargate instance. ACM certificates cannot be used for that purpose.
Configure your webserver inside the container with the cert and private key as normal to listen on 443. A container hosted on Fargate with a public IP is not much different from an EC2 instance with a public IP. You are already taking care of updating the A record when it changes.
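The asker's record-updating Lambda isn't shown, but a minimal version of that pattern might hang off an EventBridge rule for ECS Task State Change events. A sketch under that assumption; the zone ID and record name are placeholders:

```python
# lambda_update_dns.py - sketch of a Lambda wired to an EventBridge rule on
# "ECS Task State Change" events that points www at the new task's public IP.
import boto3

HOSTED_ZONE_ID = "Z_PLACEHOLDER"   # placeholder hosted zone ID
RECORD_NAME = "www.example.com"    # placeholder record name


def handler(event, context):
    detail = event["detail"]
    if detail.get("lastStatus") != "RUNNING":
        return  # only act once the task is actually running

    # The event carries the task's ENI ID, not its public IP, so look the ENI up.
    eni_id = next(
        kv["value"]
        for attachment in detail["attachments"]
        for kv in attachment.get("details", [])
        if kv["name"] == "networkInterfaceId"
    )
    eni = boto3.client("ec2").describe_network_interfaces(NetworkInterfaceIds=[eni_id])
    public_ip = eni["NetworkInterfaces"][0]["Association"]["PublicIp"]

    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",  # overwrite the old task's record
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": public_ip}],
            },
        }]},
    )
```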

AWS - Can I launch nodes under a DNS domain (Auto Scale Group)?

Use Case
I'm working on an application that uses Presto, and for Presto, I have to set up HTTPS traffic internally (for security compliance reasons).
For this, I'd prefer the nodes' FQDNs to be in the same domain, e.g. myhost1.mydomain.com, myhost2.mydomain.com.
My Question
AWS automatically gives an FQDN like ip-10-20-30-40.ec2.internal. So, my question is:
Is there a way I can have a new node automatically be created with a FQDN like myhost1.mydomain.com? I know I can create internal "hosted zones" and DNS records for my hosts pretty easily, but I can't figure out how to make that the default domain for a new host.
Also, just FYI, I'm doing this for an Auto Scaling group, but I suspect that's irrelevant.
When an Amazon EC2 instance starts, it can run a script passed in via User Data.
You could code this script to create a record in Amazon Route 53 that points to the IP address of the instance (an A record, since the target is an IP address rather than another name).
I'm not sure how you'd determine the number within the name, so you might just generate a random name. Also, it can be tricky to remove the entry when the instance is terminated. One way to both assign and remove the record is to use Amazon EC2 Auto Scaling lifecycle hooks, which allow code to be triggered outside of the instance itself. It's more complex, but it would be fully effective.
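The registration half can mirror the startup script sketched earlier; the removal half is where lifecycle hooks come in. A sketch of a Lambda subscribed to the EC2_INSTANCE_TERMINATING transition, where the zone ID and the instance-ID naming scheme are assumptions:

```python
# lambda_deregister.py - sketch of a Lambda on an EC2_INSTANCE_TERMINATING
# lifecycle hook that removes the instance's DNS record before termination.
import boto3

HOSTED_ZONE_ID = "Z_PLACEHOLDER"  # placeholder hosted zone ID


def handler(event, context):
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # One simple scheme is to name records after the instance ID at boot,
    # so the terminating instance's record is easy to find. (Assumption.)
    record_name = f"{instance_id}.mydomain.com."
    route53 = boto3.client("route53")
    existing = route53.list_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        StartRecordName=record_name,
        StartRecordType="A",
        MaxItems="1",
    )["ResourceRecordSets"]
    if existing and existing[0]["Name"] == record_name:
        # DELETE requires the exact existing record set, so pass it back as-is.
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": [{"Action": "DELETE",
                                      "ResourceRecordSet": existing[0]}]},
        )

    # Tell the Auto Scaling group it can proceed with termination.
    boto3.client("autoscaling").complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
    )
```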
I'm not familiar with Presto, but here are a couple of ideas.
First, if you are using an AWS managed load balancer, you can enable HTTPS between it and the instance using a self-signed cert: the load balancer will NOT validate the cert, so your connection will be secure.
If that's not what you need, take a look at DHCP option sets for your VPC - I believe you can set your own domain name, rather than use the default ec2.internal.
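If the DHCP option set route fits, the switch is a small API call. A sketch with boto3, where the domain name and VPC ID are placeholders:

```python
# set_vpc_domain.py - sketch: give a VPC its own search domain via a DHCP
# option set, so new instances come up under mydomain.com instead of
# the default ec2.internal suffix.
import boto3

ec2 = boto3.client("ec2")

# "mydomain.com" and the VPC ID below are placeholders.
options_id = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name", "Values": ["mydomain.com"]},
        # AmazonProvidedDNS keeps the default VPC resolver working.
        {"Key": "domain-name-servers", "Values": ["AmazonProvidedDNS"]},
    ]
)["DhcpOptions"]["DhcpOptionsId"]

ec2.associate_dhcp_options(DhcpOptionsId=options_id, VpcId="vpc-0123456789abcdef0")
```

Note this only changes the domain suffix new instances pick up via DHCP; the matching A records in a private hosted zone would still have to be created separately, e.g. by the User Data approach above.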

Force DNS Redirect in AWS VPC for Public Hostname

I am trying to deploy a Kubernetes cluster into an AWS environment which does not support Route 53 queries from the generated hostname ($HostA). This environment requires an override of the endpoint configuration to resolve all Route 53 queries to $HostB. Note that I am not in control of either host, and they are both reachable on the public internet. The protokube Docker image I am deploying is not aware of this; to make it aware, I would need to build the image and host it myself, something I wish to avoid if I can (as I would probably have to do the same for every Docker image I deploy).
I am looking for a way to redirect all requests to $HostA without having to change any Docker configuration. Ideally, I would like a way to override all requests to $HostA from within my VPC so that they go to $HostB. If this is not possible, I am in control of the EC2 user data which starts up the EC2 instances that host the images. Thus, perhaps there is a way I can set /etc/hosts aliases on the EC2 host and force them to be used by all running containers (instead of each container's own /etc/hosts). Again, please keep in mind that I need to be able to control this from the host instance and NOT by overriding the Docker image's configuration.
Thank you!
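One pattern that stays entirely at the host level, in the spirit of the /etc/hosts idea above: user data resolves $HostB once, points a local dnsmasq override at it, and directs the Docker daemon's DNS to the host. This is a sketch only; it assumes dnsmasq is installed and reads /etc/dnsmasq.d, and the hostnames are placeholders:

```python
# userdata_dns_override.py - sketch of a boot-time script (run from EC2 user
# data) that makes containers resolve $HostA to $HostB's current address.
import json
import socket
import subprocess

HOST_A = "generated.example.com"  # placeholder for $HostA
HOST_B = "override.example.org"   # placeholder for $HostB

# Resolve $HostB once at boot; a cron job or timer could refresh this if it changes.
host_b_ip = socket.gethostbyname(HOST_B)

# dnsmasq answers queries for $HostA with $HostB's IP and forwards everything else.
# Assumes dnsmasq's config includes conf-dir=/etc/dnsmasq.d.
with open("/etc/dnsmasq.d/override.conf", "w") as f:
    f.write(f"address=/{HOST_A}/{host_b_ip}\n")
subprocess.run(["systemctl", "restart", "dnsmasq"], check=True)

# Point containers at the host's dnsmasq via the default docker0 bridge address.
# NOTE: this overwrites any existing daemon.json; merge instead in real use.
with open("/etc/docker/daemon.json", "w") as f:
    json.dump({"dns": ["172.17.0.1"]}, f)
subprocess.run(["systemctl", "restart", "docker"], check=True)
```

For Kubernetes pods specifically, the kubelet may inject its own resolver settings, so this host-level override is worth verifying against how protokube configures DNS before relying on it.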