AWS - Can I launch nodes under a DNS domain (Auto Scale Group)? - amazon-web-services

Use Case
I'm working on an application that uses Presto, and for Presto, I have to set up HTTPS traffic internally (for security compliance reasons).
For this, I preferably need the nodes' FQDN to be in the same domain. E.g. myhost1.mydomain.com, myhost2.mydomain.com.
My Question
AWS automatically assigns an FQDN like ip-10-20-30-40.ec2.internal. So, my question is:
Is there a way I can have a new node automatically be created with a FQDN like myhost1.mydomain.com? I know I can create internal "hosted zones" and DNS records for my hosts pretty easily, but I can't figure out how to make that the default domain for a new host.
Also, just FYI, I'm doing this for an Auto Scaling group, but I suspect that's irrelevant.

When an Amazon EC2 instance starts, it can run a script passed in via User Data.
You could code this script to create a record in Amazon Route 53 that points to the instance, for example an A record for its private IP address (a CNAME cannot point to an IP address).
I'm not sure how you'd determine the number within the name, so you may just have to generate a random name. It can also be tricky to remove the record when the instance is terminated. One way to both create and remove the record is to use Amazon EC2 Auto Scaling lifecycle hooks, which let code be triggered outside of the instance itself. It's more complex, but it would be fully effective.
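For illustration, here is a minimal User Data sketch of that idea. It assumes the instance profile is allowed to call route53:ChangeResourceRecordSets; the hosted zone ID and domain are placeholders, and the name is simply randomized:
#!/bin/bash
# Placeholders - substitute your own private hosted zone and domain.
ZONE_ID="Z0000000EXAMPLE"
NAME="myhost-$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' ').mydomain.com"
# Fetch this instance's private IP from the metadata service (IMDSv2).
TOKEN=$(curl -sX PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
IP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/local-ipv4)
# Upsert an A record for this instance in the hosted zone.
aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch "{
  \"Changes\": [{\"Action\": \"UPSERT\", \"ResourceRecordSet\": {
    \"Name\": \"$NAME\", \"Type\": \"A\", \"TTL\": 60,
    \"ResourceRecords\": [{\"Value\": \"$IP\"}]}}]
}"
A matching DELETE call, triggered from a lifecycle hook, would clean the record up at termination.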

I'm not familiar with Presto, but here are a couple of ideas.
First, if you are using an AWS managed load balancer, you can enable HTTPS between it and the instances using a self-signed certificate: the load balancer will NOT validate the certificate, so the connection will still be encrypted.
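For example, a self-signed certificate for the backend can be generated on the instance; this is just a sketch, with a made-up CN:
# The ALB encrypts traffic to the target but does not validate this certificate.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 -subj "/CN=backend.internal" \
  -keyout /etc/ssl/private/backend.key -out /etc/ssl/certs/backend.crt
Point the instance's web server at the resulting key and certificate and register the target on HTTPS.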
If that's not what you need, take a look at DHCP option sets for your VPC - I believe you can set your own domain name, rather than use the default ec2.internal.
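Something along these lines, with placeholder IDs:
# Create a DHCP options set with a custom domain name and attach it to the VPC.
aws ec2 create-dhcp-options --dhcp-configurations \
  "Key=domain-name,Values=mydomain.com" \
  "Key=domain-name-servers,Values=AmazonProvidedDNS"
aws ec2 associate-dhcp-options --dhcp-options-id dopt-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
Note this changes the DNS suffix the instances receive; you still need matching records in a hosted zone if you want the names to resolve.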

Related

Certbot certificate rate limit hit during automation

I have purchased some Elastic IPs from AWS, which are mapped to subdomains,
e.g. an Elastic IP mapped to xyz.domain.com.
I have an algorithm that creates EC2 instances according to the load on our website.
After an instance starts successfully, I associate one of those Elastic IPs with it using the API.
That triggers my service to generate a certificate using certbot, which completes the setup of the new instance so I can use it in my existing architecture.
When the load goes back to normal, I remove those new instances.
My problem is that when the load fluctuates, I sometimes hit certbot (Let's Encrypt) rate limits and things stop working, because without an SSL certificate my whole system collapses.
So what can I do to solve this problem?
Fixed parameters:
10 Elastic IPs; all the domains are subdomains of one main domain and are already mapped to the Elastic IPs.
If you really want to use certbot, then you need to store these certificates and reuse them when you start a new instance. You could use a Parameter Store SecureString per Elastic IP, for example: when an instance spins up, it checks this parameter first, and only if there is no certificate, or it expires soon, does it get a new one and overwrite the stored value. With this solution, a new instance does not mean a new certificate.
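A rough sketch of that startup check (the parameter name, paths, and email are made up; the Advanced tier is needed because a chain plus key exceeds the 4 KB standard parameter limit):
#!/bin/bash
DOMAIN="xyz.domain.com"
PARAM="/certs/$DOMAIN"  # one SecureString per subdomain / Elastic IP
# Reuse the stored certificate if it exists and is valid for 30+ days.
if aws ssm get-parameter --name "$PARAM" --with-decryption \
     --query Parameter.Value --output text > "/etc/ssl/$DOMAIN.pem" 2>/dev/null \
   && openssl x509 -checkend $((30*24*3600)) -noout -in "/etc/ssl/$DOMAIN.pem"; then
  echo "Reusing stored certificate."
else
  # No certificate, or it expires soon: issue a new one and store it back.
  certbot certonly --standalone -n --agree-tos -m admin@domain.com -d "$DOMAIN"
  cat "/etc/letsencrypt/live/$DOMAIN/fullchain.pem" "/etc/letsencrypt/live/$DOMAIN/privkey.pem" > "/etc/ssl/$DOMAIN.pem"
  aws ssm put-parameter --name "$PARAM" --type SecureString --tier Advanced \
    --overwrite --value "file:///etc/ssl/$DOMAIN.pem"
fi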
But this setup feels wrong. You could use an Application Load Balancer, which integrates with ACM and Route 53, to move HTTPS termination to a single service and stop caring about how instances start and stop in the background.

HTTPS on Fargate's public IP - is it possible?

I run a service on Fargate and my main objective is to keep the cost as low as possible. A minor downtime is not an issue, which is helpful with the current approach. I have one instance of the task running on Fargate (with the Spot capacity provider). I have my domain in Route 53 and I'm using a Lambda function to update the A record for www when a new container starts. Everything seems to be working fine. I need to enable HTTPS though, and I'm stuck on this one - I don't know if it's possible. I created a (free) certificate in ACM, but I don't know how to make the service listen on port 443 (which is allowed in the security group). Using a load balancer is not an option, as it would increase the cost by ~$15.
Is this possible? Maybe I just need to modify the container (using Apache)?
It's possible, but you will need to look into something like Let's Encrypt for an SSL certificate you can use directly inside the Fargate task; ACM certificates cannot be used for that purpose.
Configure the web server inside the container with the certificate and private key as normal, listening on 443. A container hosted on Fargate with a public IP is not much different from an EC2 instance with a public IP, and you are already taking care of updating the A record when it changes.
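As a sketch of that approach (the domain, email, and image layout are assumptions), the container entrypoint could obtain its own certificate at startup and then launch the web server on 443:
#!/bin/sh
# Issue a Let's Encrypt certificate for this task. Requires port 80 to be
# reachable for the HTTP-01 challenge, with the A record pointing at this
# task's public IP (which your Lambda maintains).
certbot certonly --standalone -n --agree-tos -m admin@example.com -d www.example.com
ln -sf /etc/letsencrypt/live/www.example.com/fullchain.pem /etc/ssl/server.crt
ln -sf /etc/letsencrypt/live/www.example.com/privkey.pem /etc/ssl/server.key
# Start whatever web server the image uses (Apache here), configured for 443.
exec httpd -DFOREGROUND
Mind Let's Encrypt rate limits if tasks restart often; persisting issued certificates (e.g. in Parameter Store, as in the previous answer) avoids re-issuing on every start.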

Redirect old RDS traffic to the new RDS in AWS

Old database endpoint : old.cy336nc8sq5l.us-east-1.rds.amazonaws.com
New database endpoint : new.cy336nc8sq5l.us-east-1.rds.amazonaws.com
The endpoints above were created automatically by AWS when the RDS instances were created.
I have tried setting up a CNAME for old.cy336nc8sq5l.us-east-1.rds.amazonaws.com with the value new.cy336nc8sq5l.us-east-1.rds.amazonaws.com, but it did not work. For this I had to create a new hosted zone in Route 53 named cy336nc8sq5l.us-east-1.rds.amazonaws.com.
However, if I set up a CNAME in another hosted zone for a URL like abc.example.com with the value new.cy336nc8sq5l.us-east-1.rds.amazonaws.com, it works like a charm. The old RDS URL is used in multiple applications, and I cannot risk abandoning it completely; the best approach would be some kind of redirection.
In addition, no CNAME under the cy336nc8sq5l.us-east-1.rds.amazonaws.com hosted zone works.
How can I fix this? Also, what is the best practice for redirecting RDS traffic? I know that for the new DB endpoint I should create a custom CNAME and use that going forward rather than the default one. All suggestions are welcome :)
You can't add records for the domain cy336nc8sq5l.us-east-1.rds.amazonaws.com, because you don't control it. You can generally create a hosted zone for any name (google.com, etc.), but it won't take effect unless you change the NS and SOA records at the original DNS provider to point to yours, and you can't do that with the AWS RDS domains. You can confirm this by running:
dig +short -t ns cy336nc8sq5l.us-east-1.rds.amazonaws.com
If that returns your NS records, then you control the domain.
To have this kind of flexibility in the future, I would suggest creating a private zone like mydb.com with a CNAME record like master.mydb.com whose value is old.cy336nc8sq5l.us-east-1.rds.amazonaws.com. When you want to switch to another endpoint, just change the record in Route 53; after the TTL expires, connections will start going to the new endpoint. Since you are making a change anyway, it is better to start using this approach now.
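Switching would then be a single record change, e.g. (the hosted zone ID is a placeholder):
# Repoint master.mydb.com at the new endpoint; clients follow once the TTL expires.
aws route53 change-resource-record-sets --hosted-zone-id Z0000000EXAMPLE --change-batch '{
  "Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
    "Name": "master.mydb.com", "Type": "CNAME", "TTL": 60,
    "ResourceRecords": [{"Value": "new.cy336nc8sq5l.us-east-1.rds.amazonaws.com"}]}}]
}'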
Also, in your case, after you switch to the new endpoint you can check the connection count on the old DB to see whether it is still referenced anywhere; by running show processlist; you can see which IPs are still using it.
The bottom line is that you are going to have to update all 30 applications to use the new DB endpoint. If you are going to be deleting and recreating databases like this regularly, then configure your databases with a name in a zone you control, and create a CNAME to whatever database endpoint is current.
You may be able to create a temporary solution by adding an entry to /etc/hosts (assuming your clients run Linux; I believe this is also possible on Windows, but it has been a long time) that maps the old hostname to the current IP of the new database. But this is probably just as much work as updating the applications to use the new database, and it will also fail if you are running a Multi-AZ database and a failover event occurs.
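For example (the IP is hypothetical; resolve the new endpoint first):
# dig +short new.cy336nc8sq5l.us-east-1.rds.amazonaws.com   -> e.g. 10.0.12.34
# Then, in /etc/hosts on each client, map the old hostname to that IP:
10.0.12.34 old.cy336nc8sq5l.us-east-1.rds.amazonaws.com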
Changing your DB identifier can help in some cases:
Select your cluster -> Modify -> change DB cluster identifier
You keep your old database under a different endpoint, then rename the new DB so that it takes over the old endpoint.
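The same rename can be done from the CLI; a sketch for a non-Aurora instance (identifiers are placeholders; Aurora clusters use modify-db-cluster instead):
# Free the old endpoint by renaming the old instance...
aws rds modify-db-instance --db-instance-identifier old --new-db-instance-identifier old-retired --apply-immediately
# ...then give the new instance the old identifier, and with it the old endpoint.
aws rds modify-db-instance --db-instance-identifier new --new-db-instance-identifier old --apply-immediately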
But I love the /etc/hosts solution, as it is simple and safe.

OpenVPN and VPC peering - How to resolve .compute.internal domains in two different accounts with BIND9

At our company we have three AWS accounts: the main one, used as the "root" account for IAM and hosting an OpenVPN Access Server, and two more, pro and stg. Each one has its own VPC, with different IP ranges, and we have a VPC peering between the root and pro accounts, and another one between root and stg. IP routing is already set up and everything is under control on that side.
(I'm sorry I can't upload images yet, so here you have the link)
VPN+VPC-Peering
The problem is DNS resolution. The setup is this:
I've installed BIND9 on the OpenVPN server to allow DNS forwarding for the private hosted domains, using a configuration like this one in named.conf.local:
zone "stg-my-internal-domain.com" IN {
type forward;
forward only;
forwarders { 10.229.1.100;10.229.2.100; };
};
zone "pro-my-internal-domain.com" IN {
type forward;
forward only;
forwarders { 10.228.1.100;10.228.2.100; };
};
There are also two Route 53 inbound resolver endpoints (a simple BIND server running in each VPC also works): 10.229.1.100 and 10.229.2.100 for the stg account, and 10.228.1.100 and 10.228.2.100 for the pro account.
VPN clients have OpenVPN profiles that use the Access Server as DNS resolver.
From my client, I can resolve both my-service-1.pro-my-internal-domain.com and my-service-2.stg-my-internal-domain.com perfectly. The problem comes when I want to resolve the internal domain names that AWS generates inside each VPC, like my-service-2.eu-west-1.compute.internal.
I know this is an anti-pattern and I should use the private domains as much as I can, but in some cases, such as EMR clusters, the YARN and Hadoop managers use links that reference the internal AWS names, making resolution impossible.
So my question is: is there any way to configure DNS to delegate resolution to a secondary address if the primary fails?
I could set up a forwarder for the eu-west-1.compute.internal zone using all the accounts' resolvers, but the DNS specification says that the secondary nameserver is only used if the first one is unreachable: as long as the first one answers, even with an empty or NXDOMAIN response, that still counts as a valid response and the second one will not be queried.
Any help is really appreciated!
Why not just change the internal hostname to a public DNS name? Those services use the hostname assigned to the instance, of course, and you can change it.
See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/set-hostname.html
You may (or may not) need to assign a fixed private IP to each instance. In any case, publish this private IP in a public DNS zone; you should then be able to resolve these names properly. Note that you can also have a script run on each instance at startup to update the hostname and DNS record, as sketched below.
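A minimal startup sketch of that idea (the hostname is a placeholder; cloud-init paths as on Amazon Linux):
#!/bin/bash
# Give the instance a resolvable name instead of ip-x-y-z....compute.internal.
hostnamectl set-hostname my-service-2.pro-my-internal-domain.com
# Stop cloud-init from resetting the hostname on the next boot.
echo 'preserve_hostname: true' > /etc/cloud/cloud.cfg.d/99_hostname.cfg
A Route 53 upsert like the one in the first answer on this page would then publish the matching record.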
For a good discussion on private ip addresses in public DNS, see https://serverfault.com/questions/4458/private-ip-address-in-public-dns
For reference, here is the best answer there:
Some people will say no public DNS records should ever disclose private IP addresses... with the thinking being that you are giving potential attackers a leg up on some information that might be required to exploit private systems. Personally, I think that obfuscation is a poor form of security, especially when we are talking about IP addresses, because in general they are easy to guess anyway, so I don't see this as a realistic security compromise.
The bigger consideration here is making sure your public users don't pick up this DNS record as part of the normal public services of your hosted application, i.e. external DNS lookups somehow start resolving to an address they can't get to.
Aside from that, I see no fundamental reason why putting private address A records into the public space is a problem... especially when you have no alternate DNS server to host them on.
If you do decide to put this record into the public DNS space, you might consider creating a separate zone on the same server to hold all the "private" records. This will make it clearer that they are intended to be private... however, for just one A record, I probably wouldn't bother.
AWS only supports DNS resolution of these internal IPv4 DNS hostnames if your VPN is in the same region as your EMR cluster (or any other compute resource). I have reached out to AWS Support and they confirmed this.
For example, I have an AWS Client VPN endpoint set up in Frankfurt and an EMR cluster in Ireland. I push the VPC's private DNS server to my host (and all other related configuration is enabled in both VPCs) so that I can resolve private Route 53 DNS zone records.
While I am connected to the VPN,
I can't resolve this:
$ dig +short ip-10-11-x-x.eu-west-1.compute.internal
$
But I can resolve the following, which is an instance that's in the same region as the VPN endpoint:
$ dig +short ip-10-10-x-y.eu-central-1.compute.internal
10.10.x.y
How to solve this:
Either move your EMR clusters in the same region as your VPN is, or the other way around.
But the simplest solution might be to just use a Chrome plugin (here's an example) that automatically redirects ip-x-y-z... URLs to x.y.z IPs.

AWS RDS IP static or dynamic?

I have an RDS instance with a URL that was provided by Amazon (this URL has an IP associated with it, of course).
To make connecting to the DB easier, I pointed a name on my domain, db.myDomain.com, at the IP of the DB instance.
For a week it all worked fine, but then it suddenly stopped working. After searching for a few hours, I realized that the IP I was pointing at was no longer the IP of the instance.
This made me think that maybe the IPs on RDS are dynamic and the only way to access the DB is with the URL provided by Amazon.
Is this correct? If so, is there a way to redirect from one URL to another?
Yes, your observation about the dynamic nature of RDS IPs is correct, and it is the anticipated behaviour of the service. Always use the URL provided by RDS to access the instance(s).
For most use cases you don't need a redirect at all, as the DNS name goes in a config file / connection string. If you still need a friendly name, you can use Route 53 to create one (a CNAME to the RDS endpoint). Here is the AWS documentation for doing that [ https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-rds-db.html ] - it is easier and more convenient.
For an RDS instance, the DNS name does not change, but the IP address can change in some cases, especially when you enable Multi-AZ (multiple Availability Zones): the RDS instance is switched to another Availability Zone, with a different IP address, when AWS detects a failure.
So in your application you can't pin the IP address for database access; always use the DNS name (domain name) to reach your database.