I am trying to upload a zone file from GoDaddy to AWS. When I copy-paste the zone file content into AWS and click upload, the following error appears:
Error parsing zone file: Error in line 38: Invalid address: >>++PARKED1++<< (encountered after 1 correct records)
In line:
@ 600 IN A >>++PARKED1++<<
It looks like your domain was 'parked' with GoDaddy at the time you tried to export your zone file. >>++PARKED1++<< is an internal variable which GoDaddy uses in their DNS database.
The actual record is an A record, and you should just replace >>++PARKED1++<< with the external IP address of your hosting provider (e.g. 1.1.1.1).
After the change you should expect that line of the zone file to read as:
@ 600 IN A 1.1.1.1 (for example)
The GoDaddy help page also says:
The exported data follows the BIND zone file format and RFC 1035. You
must manually edit the exported data before a BIND DNS server can use
it directly. These edits will differ based on the requirements of the
server to which you are uploading the exported file.
But sadly it does not provide any useful pointers to the reader as to what exactly needs to be changed...
If you are mapping to an elasticbeanstalk.com endpoint then you shouldn't use an IP address (as it may change); instead, change the record type to an alias and add the name of your endpoint, xxxx.elasticbeanstalk.com.
I was stuck exactly here for a while, and I think I might have an answer.
In place of the 'parked' / missing A record value, use the IP of the current application at its temporary address.
For example, the IP address of example.eu-north-1.elasticbeanstalk.com
If unknown, this IP address can also be found at www.whatsmydns.net.
Just type in the temporary address (e.g. the EB URL above) and the IP will show.
I.e. this is the A record value to use in place of the word 'Parked'... copy & paste.
A second update on this...
After a couple of days I learnt that the method above did not work too well.
Essentially, the 'A - IPv4 address' record of my EB app kept changing every so often.
Instead, I updated the A record to an alias (by ticking Alias = Yes) and then entered the address of my EB app, e.g. xxxxxx.elasticbeanstalk.com.
So far this has worked.
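For reference, here's roughly the same change made with the aws-sdk-route53 gem instead of the console. This is just a sketch: the zone ID, record name and endpoint are placeholders, and the alias target's hosted_zone_id must be the region-specific Elastic Beanstalk zone ID from the AWS docs, not your own zone's ID.

require 'aws-sdk-route53' # gem install aws-sdk-route53

route53 = Aws::Route53::Client.new(region: 'eu-north-1')
route53.change_resource_record_sets(
  hosted_zone_id: 'Z_MY_HOSTED_ZONE',          # placeholder: your Route 53 zone
  change_batch: {
    changes: [{
      action: 'UPSERT',
      resource_record_set: {
        name: 'example.com.',                  # placeholder: your domain
        type: 'A',
        alias_target: {
          hosted_zone_id: 'Z_EB_REGION_ZONE',  # placeholder: the region's EB zone ID
          dns_name: 'xxxxxx.elasticbeanstalk.com.',
          evaluate_target_health: false
        }
      }
    }]
  }
)

Because the alias is resolved by Route 53 at query time, it keeps working even when the environment's underlying IP changes.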
Just remove that line and use import. After the import, you can add the alias or the IP address as a new record.
After transferring a domain from another registrar to AWS, I can't get it verified in the Certificate Manager. I created a hosted zone, the CNAME records created by the Certificate Manager are there, I tried with the DNS tester - the records seem good. However it still says "pending validation". I tried a few times, waited a couple of days and it doesn't seem it will work.
I'm totally out of ideas, any help?
DNS validation requires 2 things to be set up correctly: Record Name and Record Value.
Check if you're correctly setting these in Route53. Reference Doc here: https://docs.aws.amazon.com/acm/latest/userguide/dns-validation.html
Now, 2 issues which are very common:
In the Record Name part, confirm that you're not adding your domain name in the value. _X is the only part you have to copy-paste. If you copy _X.YourDomain, then the 'YourDomain' part is duplicated.
Record Value ends with . (a period / dot). Don't remove that period.
You can verify the settings from https://mxtoolbox.com/ it has various configurations like A record, CNAME, DNS Validation, etc.
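If you'd rather double-check the validation record from code, here is a minimal sketch using Ruby's standard-library resolver; the record name and expected value are hypothetical placeholders for whatever your ACM console shows.

require 'resolv' # Ruby standard library

# Placeholders: copy the real name/value pair from the ACM console.
name     = '_3839f23abc.example.com'
expected = '_abc123.acm-validations.aws.'

Resolv::DNS.open do |dns|
  # Raises Resolv::ResolvError if the CNAME doesn't exist yet.
  cname  = dns.getresource(name, Resolv::DNS::Resource::IN::CNAME)
  target = cname.name.to_s + '.' # Resolv omits the trailing dot
  puts target == expected ? 'validation record looks correct' : "mismatch: got #{target}"
end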
I have a question about defining DNS records. At this URL (https://cloud.google.com/dns/docs/records), I read:
Note: Adding the @ symbol in this field causes the record to fail.
This raises some doubts. Until now, whenever I defined records in Google Cloud DNS, instead of using @ I left the field empty (thus referring to $ORIGIN).
Is that right?
That is, for example:
example.com. 300 IN TXT "v=spf1 xxxxxxxxxxxxxxxxxxxxxxxxx"
example.com. 300 IN MX 10 server.domain.com.
Thank you very much
I have a group of microservices hosted on AWS. These services interact with each other through request/response using DNS names defined in Route 53, where I created a new private zone named api.io and defined the DNS records, for example WSG_KAFKA. In my code I have configured the DNS name together with the zone name, like WSG_KAFKA.api.io.
Is there any way to omit the domain name api.io and use the DNS name directly?
To use the hostname directly you need to edit your /etc/resolv.conf and add a search api.io option, so your file may look like:
search api.io
nameserver 10.0.0.2
That will let you resolve the host by just using WSG_KAFKA.
From the man resolv.conf:
search Search list for host-name lookup.
The search list is normally determined from the local domain
name; by default, it contains only the local domain name.
This may be changed by listing the desired domain search path
following the search keyword with spaces or tabs separating
the names. Resolver queries having fewer than ndots dots
(default is 1) in them will be attempted using each component
of the search path in turn until a match is found. For
environments with multiple subdomains please read options
ndots:n below to avoid man-in-the-middle attacks and
unnecessary traffic for the root-dns-servers. Note that this
process may be slow and will generate a lot of network traffic
if the servers for the listed domains are not local, and that
queries will time out if no server is available for one of the
domains.
The search list is currently limited to six domains with a
total of 256 characters.
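As a quick sanity check from code: Ruby's standard-library resolver reads /etc/resolv.conf, including the search list, so once search api.io is in place a bare lookup of the WSG_KAFKA name from the question should work (a sketch, assuming the record exists in the private zone):

require 'resolv' # honors /etc/resolv.conf, including the search list

# With "search api.io" configured, the resolver expands the bare name
# and effectively looks up WSG_KAFKA.api.io.
puts Resolv.getaddress('WSG_KAFKA')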
I have two domain names in AWS Route 53:
bar.org
mybar.org
I am trying to generate a Let's Encrypt certificate using a Ruby-based hook for the dns-01 challenge (https://gist.github.com/joshgarnett/02920846fea35f738d3370fd991bb0e0).
I am generating a certificate for the domain "mybar.org", so my domains.txt contains the name:
mybar.org
When I try to run dehydrated -c I get the following error:
RRSet with DNS name _acme-challenge.mybar.org. is not permitted in zone bar.org.
Why does it try to add the RRSet in bar.org instead of mybar.org? How do I get it working?
The Ruby-based DNS hook linked in the question has a bug at the following line in the find_hosted_zone function, which finds the hosted zone's index among the available Route 53 zones:
index = hosted_zones.index { |zone| domain.end_with?(zone.name.chop) }
The index is chosen based on whether the domain ends with the zone name. Since my domain name "mybar.org" evaluates to true against "bar.org" (the other available zone), it returns the index of that zone. So this needs a PR to solve the issue.
In my case it worked fine when I modified the code as:
index = hosted_zones.index { |zone| zone.name.chop.end_with?(domain) }
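Note that this modified check only matches when the zone name ends with the whole domain, which works here because domains.txt holds the zone apex itself. An untested sketch of a variant that also covers names under the zone is to match on a dot boundary instead (same variable names as the gist):

index = hosted_zones.index do |zone|
  apex = zone.name.chop # zone names come back with a trailing dot: "bar.org." -> "bar.org"
  # Exact apex match, or any name under it; the leading dot stops
  # "mybar.org" from matching the shorter zone "bar.org".
  domain == apex || domain.end_with?(".#{apex}")
end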
I am working with a team that is using S3 to host content. They moved from a single bucket for all brands to one bucket for each brand, and now we are having trouble when linking to the content from within a Salesforce Site.com page. When I copy the link from S3 as HTTPS, I get a "Your connection is not private. Attackers might be trying to steal your information from spiritxpress.s3.varsity.s3.amazonaws.com (for example, passwords, messages, or credit cards)." error.
I have asked them to compare the settings with the one that is working, but I don't have access to dig into it myself, and we are pretty new to this as well, so I thought I would see if there were any known paths to walk down.
Let me know if additional information is needed and I will provide as quickly as I can.
[EDIT] The bucket naming convention they are using is all lowercase and meets the naming guidelines as well, but the way it is structured seems strange to me: they have named the bucket "brandname.s3.companyname", and when copying the link it comes across as "https://brandname.s3.company.s3.amazonaws.com/directory/filename", where the other bucket was being rendered as "https://s3.amazonaws.com/bucketname/......
Whoever made this change has failed to account for the way wildcard certificates work in HTTPS.
Requests to S3 using HTTPS are greeted with a certificate identifying itself as "*.s3[-region].amazonaws.com" and in order for the browser to consider this to be valid when compared to the link you're hitting, there cannot be any dots in the part of the hostname that matches the * offered by the cert. Bucket names with dots are valid, but they cannot be used on the left side of "s3[-region].amazonaws.com" in the hostname unless you are willing and able to accept a certificate that is deemed invalid... they can only be used as the first element of the path.
The only way to make dotted bucket names and S3 native wildcard SSL to work together is the other format: https://s3[-region].amazonaws.com/example.dotted.bucket.name/....
If your bucket isn't in us-standard, you likely need to use the region in the hostname, so that the request goes to the correct endpoint, e.g. https://s3-us-west-2.amazonaws.com/example.dotted.bucket.name/path... for a bucket in us-west-2 (Oregon). Otherwise S3 may return an error telling you that you need to use a different endpoint (and the endpoint they provide in the error message will be valid, but probably not the one you're wanting for SSL).
This is a limitation of how SSL certificates work, not a limitation in S3.
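You can see the single-label wildcard rule in isolation with Ruby's OpenSSL hostname-matching helper and a throwaway self-signed certificate. A minimal sketch (the dotted hostname is the one from the question; the rest is scaffolding):

require 'openssl'

# Build a throwaway self-signed cert whose SAN is the S3 wildcard.
key  = OpenSSL::PKey::RSA.new(2048)
cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = OpenSSL::X509::Name.parse('/CN=*.s3.amazonaws.com')
cert.issuer     = cert.subject
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 3600
ef = OpenSSL::X509::ExtensionFactory.new
ef.subject_certificate = cert
ef.issuer_certificate  = cert
cert.add_extension(ef.create_extension('subjectAltName', 'DNS:*.s3.amazonaws.com'))
cert.sign(key, OpenSSL::Digest.new('SHA256'))

# "*" may stand in for exactly one label, so a dotted bucket name fails.
p OpenSSL::SSL.verify_certificate_identity(cert, 'mybucket.s3.amazonaws.com')
# => true
p OpenSSL::SSL.verify_certificate_identity(cert, 'spiritxpress.s3.varsity.s3.amazonaws.com')
# => false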
Okay, it appears it did boil down to some permissions that were missed, and we were able to get the file to display as expected. Other issues are present, but this one is resolved, so marking as answered.