Why is my DNS server unable to resolve my domain names? - amazon-web-services

I tried setting up a primary and a secondary DNS server for my cloud computing class, but nothing seems to work. The exact specifications of the servers are:
DNS
• Choose a FQDN for your network (doesn’t need to be officially registered)
• Implement a primary and a secondary DNS server hosting your domain. The secondary DNS server should mirror the configuration of the primary and should be able to automatically take over if the primary server goes down.
• All servers need to be included in the DNS zone file
• Reverse lookup should be available for all servers as well
• Your DNS servers should forward to an external name service to resolve external requests (e.g., Google DNS); a sample forwarders block for this is sketched right after this list
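(For that last point, the kind of forwarders block I had in mind in named.conf is sketched below; this is only a sketch, assuming Google's public resolvers and recursion restricted to the trusted ACL.)
options {
    ...
    recursion yes;
    allow-recursion { trusted; };    # answer recursive queries only for trusted hosts
    forwarders {
        8.8.8.8;    # Google public DNS
        8.8.4.4;
    };
    forward only;    # always use the forwarders rather than iterating from the roots
};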
After I set up both servers I tried to use the dig command, but it couldn't resolve my own domain names.
(dig clcteamvier.com)
But it did resolve, for example, "google.com" correctly.
I created the server instances on AWS and used security groups to ensure only my team could connect to the servers. The security groups are definitely set up correctly; I even checked with my teacher. Here are the files that I created and added to both DNS servers:
This is my named.conf:
acl "trusted" {
10.0.0.10; # ns1 - can be set to localhost
10.0.0.11; # ns2
10.0.0.13; # ldap
10.0.0.12; # gitlab - still has to change ip addr. back
};
options {
listen-on port 53 { 127.0.0.1; 10.0.0.10; };
# listen-on-v6 port 53 { ::1; };
allow-transfer { 10.0.0.11; }; # disable zone transfers by default
allow-query { trusted; }; # allows queries from "trusted" clients
};
include "/etc/bind/named.conf.local";
This is my named.conf.local:
zone "clcteamvier.com" {
type master;
file "/etc/named/zones/db.clcteamvier.com";
};
zone "0.0.10.in-addr.arpa" {
type master;
file "/etc/named/zones/db.10.0";
};
This is my /etc/named/zones/db.clcteamvier.com:
$TTL 604800
@ IN SOA dnsprim.clcteamvier.com. admin.clcteamvier.com. (
        3         ; Serial
        604800    ; Refresh
        86400     ; Retry
        2419200   ; Expire
        604800 )  ; Negative Cache TTL
;
; name servers - NS records
IN NS dnsprim.clcteamvier.com.
IN NS dnssec.clcteamvier.com.
; name servers - A records
dnsprim.clcteamvier.com. IN A 10.0.0.10
dnssec.clcteamvier.com. IN A 10.0.0.11
; 10.0.0.0/23 - A records
ldap.clcteamvier.com. IN A 10.0.0.13
gitlab.clcteamvier.com. IN A 10.0.0.12
This is my /etc/named/zones/db.10.0:
$TTL 604800
@ IN SOA clcteamvier.com. admin.clcteamvier.com. (
        3         ; Serial
        604800    ; Refresh
        86400     ; Retry
        2419200   ; Expire
        604800 )  ; Negative Cache TTL
; name servers
IN NS dnsprim.clcteamvier.com.
IN NS dnssec.clcteamvier.com.
; PTR Records
10 IN PTR dnsprim.clcteamvier.com. ; 10.0.0.10
11 IN PTR dnssec.clcteamvier.com. ; 10.0.0.11
13 IN PTR ldap.clcteamvier.com. ; 10.0.0.13
12 IN PTR gitlab.clcteamvier.com. ; 10.0.0.12
I ran these commands to check whether there are any errors:
sudo named-checkzone clcteamvier.com /etc/named/zones/db.clcteamvier.com
sudo named-checkzone 0.0.10.in-addr.arpa /etc/named/zones/db.10.0
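For completeness, the overall configuration can also be checked and BIND reloaded with something along these lines (the service name may be named or bind9 depending on the distribution):
sudo named-checkconf
sudo systemctl restart named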
This is my secondary DNS server.
This is its named.conf:
acl "trusted" {
10.0.0.10; # ns1 - can be set to localhost
10.0.0.11; # ns2
10.0.0.13; # ldap
10.0.0.12; # gitlab
};
options {
listen-on port 53 { 127.0.0.1; 10.0.0.11; };
# listen-on-v6 port 53 { ::1; };
allow-query { trusted; }; # allows queries from "trusted" clients
include "/etc/named/named.conf.local";
This is its named.conf.local:
zone "clcteamvier.com" {
type slave;
file "slaves/db.clcteamvier.com";
masters { 10.0.0.10; }; # ns1 private IP
};
zone "0.0.10.in-addr.arpa" {
type slave;
file "slaves/db.10.0";
masters { 10.0.0.10; }; # ns1 private IP
};
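For reference, a way to narrow the problem down is to point dig directly at each server instead of at the instance's default resolver, roughly like this (10.0.0.10 and 10.0.0.11 are the private IPs of the primary and the secondary):
dig @10.0.0.10 clcteamvier.com     # ask the primary directly
dig @10.0.0.11 clcteamvier.com     # ask the secondary directly
dig @10.0.0.10 -x 10.0.0.13        # reverse lookup for the ldap host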

Related

Automatically renew wildcard certificates with certbot on NameCheap - port 53 problem?

I'm trying to get an AWS/Lightsail Debian server automatically renewing certificates with certbot. My DNS is with Namecheap.
I'm following the steps at https://blog.bryanroessler.com/2019-02-09-automatic-certbot-namecheap-acme-dns/. I keep getting a no-permission error.
I run:
sudo certbot certonly -d "*.example.com" --agree-tos --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --preferred-challenges dns --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --debug-challenges
I see:
Failed authorization procedure. example.com (dns-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: No TXT record found at _acme-challenge.example.com
It says I need to open port 53. I followed Amazon's Lightsail instructions. Neither iptables nor ufw seems to be installed. When I nmap my machine, I don't see port 53. For lack of a better idea, I actually installed ufw, to no avail.
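For reference, the kind of checks I mean look roughly like this (<public ip> is a placeholder, and since DNS runs over UDP, nmap needs -sU to see port 53):
sudo ss -lunp | grep ':53'         # is anything listening on UDP 53 locally?
sudo nmap -sU -p 53 <public ip>    # is UDP 53 reachable from outside?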
My /etc/acme-dns/config.cfg is as follows:
#/etc/acme-dns/config.cfg
[general]
# DNS interface
listen = ":53"
protocol = "udp"
# domain name to serve the requests off of
domain = "acme.example.com"
# zone name server
nsname = "ns1.acme.example.com"
# admin email address, where @ is substituted with .
nsadmin = "example.example.com"
# predefined records served in addition to the TXT
records = [
"acme.example.com. A <public ip>",
"ns1.acme.example.com. A <public ip>",
"acme.example.com. NS ns1.acme.example.com.",
]
debug = false
[database]
engine = "sqlite3"
connection = "/var/lib/acme-dns/acme-dns.db"
[api]
api_domain = ""
ip = "127.0.0.1"
disable_registration = false
autocert_port = "80"
port = "8082"
tls = "none"
corsorigins = [
"*"
]
use_header = false
header_name = "X-Forwarded-For"
[logconfig]
loglevel = "debug"
logtype = "stdout"
logformat = "text"
For the listen value, I also tried "127.0.0.1:53" and ":53".
The settings portion of /etc/letsencrypt/acme-dns-auth.py:
# URL to acme-dns instance
ACMEDNS_URL = "http://127.0.0.1:8082"
# Path for acme-dns credential storage
STORAGE_PATH = "/etc/letsencrypt/acmedns.json"
# Whitelist for address ranges to allow the updates from
# Example: ALLOW_FROM = ["192.168.10.0/24", "::1/128"]
ALLOW_FROM = []
# Force re-registration. Overwrites the already existing acme-dns accounts.
FORCE_REGISTER = False
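For completeness, acme-dns setups like this also require delegating the challenge name to the acme-dns instance with a CNAME record at Namecheap; mine is roughly of this form (the target subdomain comes from the registration stored in acmedns.json, so the value here is a placeholder):
_acme-challenge.example.com.  CNAME  <registration-uuid>.acme.example.com.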
Thanks for any help you can provide.
If you don't wish to maintain your own acme DNS server, I built and use this script to automatically renew NameCheap wildcard certs with certbot. I hope it helps:
https://github.com/scribe777/letsencrypt-namecheap-dns-auth

AWS CDK setting a second listener + target ignores the target port

I have an ECS container which runs two endpoints on two different ports.
I configured a network load balancer in front of it with two listeners, each with its own target group.
The AWS CDK code for my stack is here (note: I changed the construct names in my example):
class MyStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, certificate: Certificate, vpc: Vpc, repository: Repository, subnets: SubnetSelection, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        cluster: Cluster = Cluster(self, "MyCluster", vpc=vpc, container_insights=True)
        image: ContainerImage = ContainerImage.from_ecr_repository(repository=repository, tag="latest")
        task_definition: FargateTaskDefinition = FargateTaskDefinition(
            self, "MyTaskDefinition", cpu=512, memory_limit_mib=1024,
        )
        container: ContainerDefinition = task_definition.add_container(
            "MyContainer", image=image, environment={}
        )
        # As you can see, here I add two port mappings on my container
        container.add_port_mappings(PortMapping(container_port=9876, host_port=9876))
        container.add_port_mappings(PortMapping(container_port=8000, host_port=8000))

        load_balancer: NetworkLoadBalancer = NetworkLoadBalancer(
            self, "MyNetworkLoadBalancer",
            load_balancer_name="my-nlb",
            vpc=vpc,
            vpc_subnets=subnets,
            internet_facing=False
        )
        security_group: SecurityGroup = SecurityGroup(
            self, "MyFargateServiceSecurityGroup",
            vpc=vpc,
            allow_all_outbound=True,
            description="My security group"
        )
        security_group.add_ingress_rule(
            Peer.any_ipv4(), Port.tcp(9876), "Allow a connection on port 9876 from anywhere"
        )
        security_group.add_ingress_rule(
            Peer.any_ipv4(), Port.tcp(8000), "Allow a connection on port 8000 from anywhere"
        )
        service: FargateService = FargateService(
            self, "MyFargateService",
            cluster=cluster,
            task_definition=task_definition,
            desired_count=1,
            health_check_grace_period=Duration.seconds(30),
            vpc_subnets=subnets,
            security_groups=[security_group]
        )

        # Listener 1 is open to incoming connections on port 9876
        listener_9876: NetworkListener = load_balancer.add_listener(
            "My9876Listener",
            port=9876,
            protocol=Protocol.TLS,
            certificates=[ListenerCertificate(certificate.certificate_arn)],
            ssl_policy=SslPolicy.TLS12_EXT
        )
        # Incoming connections on 9876 are redirected to the container on 9876
        # A health check is done on 8000/health
        listener_9876.add_targets(
            "My9876TargetGroup", targets=[service], port=9876, protocol=Protocol.TCP,
            health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
        )

        # Listener 2 is open to incoming connections on port 443
        listener_443: NetworkListener = load_balancer.add_listener(
            "My443Listener",
            port=443,
            protocol=Protocol.TLS,
            certificates=[ListenerCertificate(certificate.certificate_arn)],
            ssl_policy=SslPolicy.TLS12_EXT
        )
        # Incoming connections on 443 are redirected to the container on 8000
        # A health check is done on 8000/health
        listener_443.add_targets(
            "My443TargetGroup", targets=[service], port=8000, protocol=Protocol.TCP,
            health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
        )
Now I deploy this stack successfully, but the result is not what I expected: two target groups directing traffic to my container, but both on port 9876.
I read in the documentation that it is possible to have a load balancer direct traffic to different ports via different target groups.
Am I doing something wrong? Or does AWS CDK not support this?
I double checked the synthesized CloudFormation template. It properly generates two target groups, one with port 9876 and one with port 8000.
Hi, you need to create a target from the service and then add it as a target to the listener.
const target = service.loadBalancerTarget({
  containerName: 'MyContainer',
  containerPort: 8000
});
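In the Python CDK used in the question, a minimal sketch of the same idea might look like this (shown only for the 443 listener; the names reuse the question's constructs):
# Build a target that pins traffic from the load balancer to a specific container port
target_8000 = service.load_balancer_target(
    container_name="MyContainer",
    container_port=8000
)
# Register that target instead of the bare service so the target group forwards to 8000
listener_443.add_targets(
    "My443TargetGroup", targets=[target_8000], port=8000, protocol=Protocol.TCP,
    health_check=HealthCheck(port="8000", protocol=Protocol.HTTP, enabled=True, path="/health")
)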

Is it possible to create AWS ALBs in bulk using a PowerShell script?

Is it possible to create AWS ALBs in bulk using a PowerShell script?
If someone could provide a PowerShell script template, that would be great.
Absolutely, you can install AWS Tools for PowerShell. Check the link below; there are examples there.
https://aws.amazon.com/powershell/
# Create HTTP Listener
$HTTPListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPListener.Protocol = 'http'
$HTTPListener.InstancePort = 80
$HTTPListener.LoadBalancerPort = 80

# Create HTTPS Listener
$HTTPSListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPSListener.Protocol = 'https'
$HTTPSListener.InstancePort = 443
$HTTPSListener.LoadBalancerPort = 443
$HTTPSListener.SSLCertificateId = 'YourSSL'

# Create Load Balancer
New-ELBLoadBalancer -LoadBalancerName 'YourLoadBalancerName' -Listeners @($HTTPListener, $HTTPSListener) -SecurityGroups @('SecurityGroupId') -Subnets @('subnetId1', 'subnetId2') -Scheme 'internet-facing'

# Associate Instances with Load Balancer
Register-ELBInstanceWithLoadBalancer -LoadBalancerName 'YourLoadBalancerName' -Instances @('instance1ID', 'instance2ID')

# Create Application Cookie Stickiness Policy
New-ELBAppCookieStickinessPolicy -LoadBalancerName 'YourLoadBalancerName' -PolicyName 'SessionName' -CookieName 'CookieName'

# Set the Application Cookie Stickiness Policy to Load Balancer
Set-ELBLoadBalancerPolicyOfListener -LoadBalancerName 'YourLoadBalancerName' -LoadBalancerPort 80 -PolicyNames 'SessionName'
This script is just for one ELB... how do I transform this script to create ELBs in bulk?
Also, where do I specify my AWS account credentials?
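Regarding the follow-up questions, a rough sketch of looping the creation and of where credentials go could look like the following (all IDs and names are placeholders; note that the snippet above uses the classic ELB cmdlets, whereas ALBs are created with the ELB2 cmdlets):
# Credentials and region for this session (an IAM role or a stored profile works too)
Set-AWSCredential -AccessKey 'AKIA...' -SecretKey 'YourSecretKey'
Set-DefaultAWSRegion -Region 'us-east-1'
# Create several Application Load Balancers in a loop
$albNames = @('alb-one', 'alb-two', 'alb-three')
foreach ($name in $albNames) {
    New-ELB2LoadBalancer -Name $name -Type 'application' -Scheme 'internet-facing' `
        -SecurityGroup @('sg-0123456789abcdef0') -Subnet @('subnet-aaaa1111', 'subnet-bbbb2222')
}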

Cannot get Route53 domain to point to Elastic Beanstalk

This is what I did:
Registered Domain Name jthinkws.com (with fasthosts)
Created a new hosted zone in Route 53 for jthinkws.com
Added a new record set for jthinkws.com:
Name: search.jthinkws.com
Type: CNAME
Value: jthinkws.elasticbeanstalk.com
Waited 12 hours for it to propagate
But still, if I enter http://search.jthinkws.com in a web browser, it is not found.
Have I done this right?
* Update *
Just searched on whois and see name servers are still set to
Name Server: NS1.LIVEDNS.CO.UK
Name Server: NS2.LIVEDNS.CO.UK
Name Server: NS3.LIVEDNS.CO.UK
Do I have to do something to get them changed to the Amazon ones?
* Update *
Changes to the nameservers have been made and propagated, yet search.jthinkws.com still does not work. Why would this be?
You need to update the nameserver records at Fasthosts to match the ones given by Route 53.
When I run "dig jthinkws.com any" in my bash shell I get the following response:
; <<>> DiG 9.8.3-P1 <<>> jthinkws.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24700
;; flags: qr rd ra; QUERY: 1, ANSWER: 5, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;jthinkws.com. IN ANY
;; ANSWER SECTION:
jthinkws.com. 3599 IN SOA ns1.livedns.co.uk. admin.jthinkws.com. 1404204424 10800 3600 604800 3600
jthinkws.com. 3599 IN NS ns2.livedns.co.uk.
jthinkws.com. 3599 IN NS ns3.livedns.co.uk.
jthinkws.com. 3599 IN A 213.171.195.105
jthinkws.com. 3599 IN NS ns1.livedns.co.uk.
;; Query time: 315 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Wed Jul 2 14:58:42 2014
;; MSG SIZE rcvd: 155
The NS records must be AWS records for Route53 to work. Check out this getting started guide.
Yes, you need to use the Amazon ones as name servers; that is the connection you establish between your domain name and Route 53.
After you create the hosted zone for jthinkws.com, go to its record sets; there will be an entry of type NS. You will need to copy these name server endpoints and update them at the DNS provider (Fasthosts) for the domain jthinkws.com.
-Santhosh
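Once the registrar change has propagated, the delegation can be verified by checking that the NS answer now lists the awsdns name servers instead of livedns.co.uk, for example:
dig NS jthinkws.com +short
dig search.jthinkws.com CNAME +short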

Issues with creating subdomain running on EC2

I have a static webpage, example.com, that is working fine and hosted on AWS S3 with Route53 connecting the A and NS record sets to my GoDaddy DNS.
I want to create sub.example.com that points to a dynamic page that will be hosted on my EC2 instance. I have my EC2 associated with an Elastic IP, whose public address is 12.12.12.12. I set up Route53 by creating a separate hosted zone for sub.example.com with 3 record sets:
An A record set named sub.example.com with the value 12.12.12.12.
An NS record set with values NS-1.org, NS-2.org, NS-3.org, and NS-4.org.
AWS seems to have generated an SOA record set, with values ns-1.org. awsdns-hostmaster.amazon.com. 1 0002 003 0000004 00005
All record sets are named sub.example.com. In my GoDaddy account, under DNS Zone File, I added the following:
A record set - name sub pointing to 12.12.12.12
4 NS records - name sub pointing to NS-1.org, NS-2.org, NS-3.org, and NS-4.org.
Am I missing something? My server is not running yet, I just want to verify that my DNS settings are ready first.
While dig example.com NS works, I tested sub.example.com with the command dig sub.example.com NS, and it failed:
[lucas#lucas-ThinkPad-W520]/home/lucas$ dig sub.example.com NS
; <<>> DiG 9.9.5-3-Ubuntu <<>> sub.example.com NS
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 44939
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;sub.example.com. IN NS
;; AUTHORITY SECTION:
example.com. 900 IN SOA ns-5.net. awsdns-hostmaster.amazon.com. 1 0002 003 0000004 00005
;; Query time: 79 msec
;; SERVER: 127.0.1.1#53(127.0.1.1)
;; WHEN: Thu May 15 13:07:40 PDT 2014
;; MSG SIZE rcvd: 128
Interestingly, in the AUTHORITY SECTION, the SOA points to ns-5.net, which is under my NS set for the example.com hosted zone, NOT my sub.example.com zone. Any suggestions?
I also queried WHOIS for sub.example.com:
Domain Name: EXAMPLE.COM
Registrar: GODADDY.COM, LLC
Whois Server: whois.godaddy.com
Referral URL: http://registrar.godaddy.com
Name Server: NS-5.ORG
Name Server: NS-6.ORG
Name Server: NS-7.ORG
Name Server: NS-8.ORG
Status: clientDeleteProhibited
Status: clientRenewProhibited
Status: clientTransferProhibited
Status: clientUpdateProhibited
Updated Date: 30-jun-2014
Creation Date: 30-jun-2013
Expiration Date: 30-jun-2015
It indicates that my NS records are pointing to the name servers for example.com and not sub.example.com.
Am I missing something, or am I doing too much?
You do not need NS records for sub.example.com. You only need NS records for your domain, example.com. The A record is enough for sub.example.com.
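In other words, the A record for the subdomain can simply live in the existing example.com hosted zone, roughly like this (using the question's placeholder values):
sub.example.com.   300   IN   A   12.12.12.12
Once that is in place, dig sub.example.com A should return 12.12.12.12 after the change propagates.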