CloudFront serves old content for domain

A week ago, I decided to put a CloudFront distribution in front of a static-website-hosting S3 bucket while setting up a CI/CD pipeline with Buddy. As part of that change, I updated the A and AAAA records in Route 53 for my site, eckmantek.com, from their old values to point at my distribution, d22tb0q1u7k32l.cloudfront.net. With this set up, I began using my pipeline, but the domain kept serving the old version of the site from before I implemented CloudFront.
At first I thought it was just a matter of waiting for the DNS to refresh, but it has been a week now.
When I manually check my S3 bucket, I see the latest build there, and if I navigate to the distribution directly, I can see the updated site as well. It's almost like the domain is being routed to a ghost bucket, like a hidden cache or something. Any help is welcome!

Not really a programming problem, but I'll give it a shot. A DNS lookup shows that both your main domain eckmantek.com and d22tb0q1u7k32l.cloudfront.net return the same A records, which is a good sign.
$ dig A eckmantek.com
; <<>> DiG 9.16.1-Ubuntu <<>> A eckmantek.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 3984
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;eckmantek.com. IN A
;; ANSWER SECTION:
eckmantek.com. 60 IN A 18.66.2.121
eckmantek.com. 60 IN A 18.66.2.15
eckmantek.com. 60 IN A 18.66.2.17
eckmantek.com. 60 IN A 18.66.2.8
$ dig A d22tb0q1u7k32l.cloudfront.net
; <<>> DiG 9.16.1-Ubuntu <<>> A d22tb0q1u7k32l.cloudfront.net
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23164
;; flags: qr rd ra; QUERY: 1, ANSWER: 4, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;d22tb0q1u7k32l.cloudfront.net. IN A
;; ANSWER SECTION:
d22tb0q1u7k32l.cloudfront.net. 60 IN A 18.66.2.121
d22tb0q1u7k32l.cloudfront.net. 60 IN A 18.66.2.15
d22tb0q1u7k32l.cloudfront.net. 60 IN A 18.66.2.8
d22tb0q1u7k32l.cloudfront.net. 60 IN A 18.66.2.17
That means traffic reaches CloudFront. Assuming you've configured CloudFront correctly to use the S3 bucket as its origin, the problem may be that CloudFront has cached the old site and the TTL is too long.
You can explicitly clear the CloudFront cache using invalidations; beyond a small free monthly quota of invalidation paths, you pay for them.
Once you have done that, make sure to set a reasonable TTL.
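For example, with the AWS CLI (a sketch; substitute your own distribution ID, which aws cloudfront list-distributions will show):
$ aws cloudfront create-invalidation \
    --distribution-id <YOUR_DISTRIBUTION_ID> \
    --paths "/*"
Invalidating /* evicts every cached object in the distribution, so the next request for each path goes back to the S3 origin.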
If we look at the headers from your website, we can also see that caching appears to be disabled (Cache-Control: max-age=0) and that the content hasn't been modified in a while:
$ curl -sD - https://eckmantek.com -o /dev/null
HTTP/2 200
content-type: text/html
content-length: 2051
last-modified: Fri, 22 Nov 2019 11:17:48 GMT
accept-ranges: bytes
server: AmazonS3
via: 1.1 a06e85a5c7853d2f85565a048a9d2609.cloudfront.net (CloudFront), 1.1 99d54fc6a14abf3079ffadd5aa7c99de.cloudfront.net (CloudFront)
x-amz-cf-pop: YTO50-C3
date: Wed, 17 Nov 2021 11:19:18 GMT
cache-control: public, must-revalidate, max-age=0
etag: "e8179ec55581b2082badf57315b368b8"
vary: Accept-Encoding
x-cache: RefreshHit from cloudfront
x-amz-cf-pop: TXL50-P1
x-amz-cf-id: ZlWuOHfa-TjS1tzL74oXFnRHaCfyPm118OhO5gOm6pXFIfOI1LdDeQ==
It's served from S3, but maybe you should check if it's using the correct S3 bucket.
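One way to double-check which origin the distribution actually points at is the AWS CLI (a sketch; the distribution ID is a placeholder):
$ aws cloudfront get-distribution-config \
    --id <YOUR_DISTRIBUTION_ID> \
    --query "DistributionConfig.Origins.Items[].DomainName"
The domain name returned should be the endpoint of the bucket your pipeline deploys to.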
As an aside: you should use a Route 53 alias record (or, for a subdomain, a CNAME record) to point your domain at CloudFront, because the CloudFront IPs may change.
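A sketch of what that looks like with the AWS CLI (your Route 53 hosted zone ID is a placeholder; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets):
$ aws route53 change-resource-record-sets --hosted-zone-id <YOUR_HOSTED_ZONE_ID> --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "eckmantek.com.",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "Z2FDTNDATAQYW2",
        "DNSName": "d22tb0q1u7k32l.cloudfront.net.",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'
The same change, with Type set to AAAA, covers the IPv6 record.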

Related

CloudFront still serves old image while S3 upload of new image was 5 days ago

I have a CloudFront distribution with an S3 origin. The bucket (versioning disabled) contains images.
Behaviour of the connection between CloudFront and S3:
Redirect HTTP to HTTPS
Cached HTTP methods: GET, HEAD (cached by default) & OPTIONS
Cache based on selected request headers: None
Use Origin Cache Headers
Min TTL: 0
Default TTL: 86400
Max TTL: 31536000
Forward cookies: all
Query string forwarding: forward all, cache based on all
Restrict viewer access, streaming, compress: no
My images in S3 have the following metadata (no cache control headers):
Content-Type image/jpeg
x-amz-meta-md5 lYw9zHZxxxxxxx8468A==
We uploaded a new image to S3 around 5 days ago. When we open the image in S3 or download it, we see the new image.
But in CloudFront we are still seeing the old image, even though we expected a cache refresh after 24 hours.
By default, CloudFront caches a response from Amazon S3 for 24 hours
(Default TTL of 86,400 seconds).
When I curl the image 2 times:
HTTP/1.1 200 OK
Content-Type: image/jpeg
Content-Length: 12769
Connection: keep-alive
Date: Tue, 22 Oct 2019 08:57:57 GMT
Last-Modified: Thu, 18 Oct 2018 10:00:56 GMT
ETag: "0d581eef776ab0b6d44dd27c8759714a"
x-amz-meta-md5: DVge73dqxxxdJ8h1lxSg==
Accept-Ranges: bytes
Server: AmazonS3
X-Cache: Miss from cloudfront
HTTP/1.1 200 OK
Content-Type: image/jpeg
Content-Length: 12769
Connection: keep-alive
Date: Tue, 22 Oct 2019 08:57:57 GMT
Last-Modified: Thu, 18 Oct 2018 10:00:56 GMT
ETag: "0d581eef776ab0b6d44dd27c8759714a"
x-amz-meta-md5: DVge73dqxxxdJ8h1lxSg==
Accept-Ranges: bytes
Server: AmazonS3
X-Cache: Hit from cloudfront
First a miss, then a hit, but the Last-Modified date is still far in the past and the new image is not retrieved from S3. I know I can create an invalidation, but I don't want to create new invalidations every time we have new images available.
What could be the issue here? If you need more info, please ask!
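Note: given the "Use Origin Cache Headers" setting above, one way to avoid per-image invalidations is to give the objects their own Cache-Control metadata at upload time. A minimal sketch, with placeholder bucket and object names:
$ aws s3 cp ./image.jpg s3://<your-bucket>/image.jpg \
    --content-type image/jpeg \
    --cache-control "max-age=300"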

Name Server is not getting updated

I changed the name servers of mypleaks.com about 24 hours ago but the change still hasn't taken effect.
It's still returning the old name servers below:
Name Server: ns-****.awsdns-**.org
Name Server: ns-***.awsdns-**.com
Name Server: ns-****.awsdns-**.co.uk
Name Server: ns-***.awsdns-**.net
I then tested the domain with Zonemaster, which reports three errors:
No common nameserver IP addresses between child and parent.
Parent has nameserver(s) not listed at the child
None of the nameservers listed at the parent are listed at the child.
Your question is off-topic, as it is not related to programming.
You are in a lame delegation situation: the list of nameservers at the registry does not match the list in your own zone.
If you query the parent zone, you currently get:
$ dig @a.gtld-servers.net mypleaks.com NS +noall +nodnssec +auth
; <<>> DiG 9.11.3-1ubuntu1.1-Ubuntu <<>> @a.gtld-servers.net mypleaks.com NS +noall +nodnssec +auth
; (2 servers found)
;; global options: +cmd
mypleaks.com. 172800 IN NS ns-107.awsdns-13.com.
mypleaks.com. 172800 IN NS ns-613.awsdns-12.net.
mypleaks.com. 172800 IN NS ns-1069.awsdns-05.org.
mypleaks.com. 172800 IN NS ns-1710.awsdns-21.co.uk.
But if you query these nameservers directly, they do not consider themselves authoritative for your domain:
;; ANSWER SECTION:
mypleaks.com. 300 IN NS ns1.digitalocean.com.
mypleaks.com. 300 IN NS ns2.digitalocean.com.
mypleaks.com. 300 IN NS ns3.digitalocean.com.
So you need to go back to Amazon and make the configuration changes needed so that all four of these nameservers (or whichever ones Amazon provides) are indeed authoritative for your domain, in which case the second reply will match the first, which is currently not the case.
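Once that is done, you can verify the fix by comparing the parent delegation with what one of the listed nameservers answers for itself, e.g. (a sketch using one of the nameservers from the delegation above):
$ dig @a.gtld-servers.net mypleaks.com NS +noall +authority
$ dig @ns-107.awsdns-13.com. mypleaks.com NS +noall +answer +norecurse
Both should then return the same set of AWS nameservers.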

BIND (in AWS) subdomain delegation from Windows not resolving

I have an internal domain, say example.com, in Windows AD DNS. I have created a sub-domain delegation, aws.example.com, with a glue record pointing to a BIND 9.8 instance in AWS (over site-to-site VPN).
The BIND instance has a single zone configured as a forward only (with forwarder) pointing to the AWS VPC subnet resolver which has an AWS Rt. 53 zone (aws.example.com) associated.
The problem is that resolution sometimes doesn't work. From my internal network, if I dig or nslookup against the Windows DNS for hosts in the Route 53 zone, I get no answer (although I do see the query hitting BIND). If I then dig/nslookup against the BIND instance directly, it works.
Now if I go back to the first step, dig/nslookup against Windows DNS, I do get successful resolution.
It's as if the initial dig/nslookup, which comes via Windows DNS, isn't triggering the forward-only behavior, while the direct query does trigger it and then caches the answer.
Can anyone provide insight into what I've done wrong or how to change this behavior?
BIND config:
acl goodclients {
    172.31.0.0/16;
    192.168.0.0/16;
    localhost;
    localnets;
};

options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { goodclients; };

    forwarders {
        172.31.0.2;
    };
    #forward only;

    dnssec-enable yes;
    dnssec-validation yes;
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
    querylog yes;
};

zone "aws.example.com" {
    type forward;
    forward only;
    forwarders { 172.31.0.2; };
};
Here's a sample of the fail-succeed-succeed sequence, running queries against Windows, then BIND, then Windows again, from two different clients:
windows AD dns domain example.com
\_ subdomain aws.example.com —> NS 172.31.32.5 (bind instance in AWS )
\_ —> forwarding to:172.31.0.2 (aws VPC resolver IP) to Rt.53 associated zone
client 1:
user1#vfvps-server:~ #date
Wed Sep 14 14:18:41 EDT 2016
user1#vfvps-server:~ #nslookup
> lserver 192.168.4.147 <—————windows dns
Default server: 192.168.4.147
Address: 192.168.4.147#53
> server1.aws.example.com
Server: 192.168.4.147
Address: 192.168.4.147#53
** server can't find server1.aws.example.com: NXDOMAIN
> exit
client 2:
KWK-MAC:~ user1$ date
Wed Sep 14 14:19:29 EDT 2016
KWK-MAC:~ user1$ dig @172.31.32.5 server1.aws.example.com <—— 172.31.32.5 = bind
; <<>> DiG 9.8.3-P1 <<>> @172.31.32.5 server1.aws.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23154
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 13, ADDITIONAL: 0
;; QUESTION SECTION:
;server1.aws.example.com. IN A
;; ANSWER SECTION:
server1.aws.example.com. 300 IN A 172.31.14.41
client 1:
user1#vfvps-server:~ #date
Wed Sep 14 14:19:40 EDT 2016
user1#vfvps-server:~ #nslookup
> lserver 192.168.4.147
Default server: 192.168.4.147
Address: 192.168.4.147#53
> server1.aws.example.com
Server: 192.168.4.147
Address: 192.168.4.147#53
Non-authoritative answer:
Name: server1.aws.example.com
Address: 172.31.14.41
A Windows DNS server configured with a subdomain delegation sends iterative (non-recursive) queries to your BIND server, and BIND will answer those only if it is authoritative for the zone or already has the record in its cache.
(You can try dig +norecurse server1.aws.example.com @172.31.32.5 and it will fail.)
In your Windows DNS, you need to configure a "Conditional Forwarder" for aws.example.com instead of the delegation.
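On a recent Windows Server, that can be done from PowerShell, roughly like this (a sketch; 172.31.32.5 is the BIND instance from the setup above):
Add-DnsServerConditionalForwarderZone -Name "aws.example.com" -MasterServers 172.31.32.5
After that, queries for aws.example.com names are forwarded (recursively) to BIND instead of being delegated to it.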

Failure: DNS resolution failed: DNS response error code NXDOMAIN on AWS Route53

I have a site hosted on AWS, and recently it went down with an NXDOMAIN error. The site was working before, and the issue doesn't appear to be with the site itself, as the direct Elastic Beanstalk link (xxxx-prod.elasticbeanstalk.com) works fine.
In Route 53 I have a CNAME pointing to xxxx-prod.elasticbeanstalk.com, plus the SOA and 4 NS records supplied by AWS. xxxx is a placeholder for the actual site name. Running dig...
dig xxxx.com any
; <<>> DiG 9.8.3-P1 <<>> xxxx.com any
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 63003
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0
;; QUESTION SECTION:
;xxxx.com. IN ANY
;; AUTHORITY SECTION:
com. 895 IN SOA a.gtld-servers.net. nstld.verisign-grs.com. 1435723016 1800 900 604800 86400
;; Query time: 31 msec
;; SERVER: 64.71.255.204#53(64.71.255.204)
;; WHEN: Tue Jun 30 23:57:22 2015
;; MSG SIZE rcvd: 102
It looks like my NS records might be the issue, but I am not sure. Can someone confirm?
TL;DR: you need to contact your registrar to figure out what's happening with the domain. You've left the domain in the question, so I actually looked at what DNS sees for it.
Do you have an A record for your domain?
host vizibyl.com
Host vizibyl.com not found: 3(NXDOMAIN)
https://www.whois.net reports:
Name Server: NS-1519.AWSDNS-61.ORG
Name Server: NS-1828.AWSDNS-36.CO.UK
Name Server: NS-228.AWSDNS-28.COM
Name Server: NS-544.AWSDNS-04.NET
Status: clientHold http://www.icann.org/epp#clientHold
Status: clientTransferProhibited http://www.icann.org/epp#clientTransferProhibited
From http://www.icann.org/epp#clientHold, regarding clientHold:
This status code tells your domain's registry to not activate your domain in the DNS and as a consequence, it will not resolve. It is an uncommon status that is usually enacted during legal disputes, non-payment, or when your domain is subject to deletion.
Often, this status indicates an issue with your domain that needs resolution. If so, you should contact your registrar to resolve the issue. If your domain does not have any issues, but you need it to resolve, you must first contact your registrar and request that they remove this status code.
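To watch for the hold being lifted, a plain whois lookup from a shell is enough (output labels vary a little between registries):
$ whois vizibyl.com | grep -i status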

mod_security false positives

I'm getting lots of false positives(?) after just setting up mod_security. I'm running it in detection-only mode, so there are no issues yet, but these filters will start blocking requests once I need it to go live.
I'm afraid I don't fully understand the significance of these filters; I get hundreds of them on nearly every domain, and all the requests look legitimate.
Request Missing a User Agent Header
Request Missing an Accept Header
What is the best thing to do here? Should I disable these filters? Can I set the severity lower so that requests won't be blocked?
Here is a complete entry
[22/Nov/2011:21:32:37 --0500] u6t6IX8AAAEAAHSiwYMAAAAG 72.47.232.216 38543 72.47.232.216 80
--5fcb9215-B--
GET /Assets/XHTML/mainMenu.html HTTP/1.0
Host: www.domain.com
Content-type: text/html
Cookie: pdgcomm-babble=413300:451807c5d49b8f61024afdd94e57bdc3; __utma=100306584.1343043347.1321115981.1321478968.1321851203.4; __utmz=100306584.1321115981.1.1.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=XXXXXXXX%20clip%20ons
--5fcb9215-F--
HTTP/1.1 200 OK
Last-Modified: Wed, 23 Nov 2011 02:01:02 GMT
ETag: "21e2a7a-816d"
Accept-Ranges: bytes
Content-Length: 33133
Vary: Accept-Encoding
Connection: close
Content-Type: text/html
--5fcb9215-H--
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "47"] [id "960015"] [rev "2.2.1"] [msg "Request Missing an Accept Header"] [severity "CRITICAL"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_ACCEPT"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Operator EQ matched 0 at REQUEST_HEADERS. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_21_protocol_anomalies.conf"] [line "66"] [id "960009"] [rev "2.2.1"] [msg "Request Missing a User Agent Header"] [severity "NOTICE"] [tag "PROTOCOL_VIOLATION/MISSING_HEADER_UA"] [tag "WASCTC/WASC-21"] [tag "OWASP_TOP_10/A7"] [tag "PCI/6.5.10"]
Message: Warning. Operator LT matched 5 at TX:inbound_anomaly_score. [file "/etc/httpd/modsecurity_crs/base_rules/modsecurity_crs_60_correlation.conf"] [line "33"] [id "981203"] [msg "Inbound Anomaly Score (Total Inbound Score: 4, SQLi=5, XSS=): Request Missing a User Agent Header"]
Stopwatch: 1322015557122593 24656 (- - -)
Stopwatch2: 1322015557122593 24656; combined=23703, p1=214, p2=23251, p3=2, p4=67, p5=168, sr=88, sw=1, l=0, gc=0
Producer: ModSecurity for Apache/2.6.1 (http://www.modsecurity.org/); core ruleset/2.2.1.
Server: Apache/2.2.3 (CentOS)
If you look at the Producer line under Section H of the audit log entry you posted, you will see that you are using the OWASP ModSecurity Core Rule Set (CRS) v2.2.1. In that case, I suggest you review the documentation on the project page -
https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project#tab=Documentation
Specifically, you should review these two blog posts that I did -
http://blog.spiderlabs.com/2010/11/advanced-topic-of-the-week-traditional-vs-anomaly-scoring-detection-modes.html
http://blog.spiderlabs.com/2011/08/modsecurity-advanced-topic-of-the-week-exception-handling.html
Blog post #1 is useful so that you understand which "mode of operation" you are using for the CRS. By looking at your audit log, it appears you are running in anomaly scoring mode. This is where the rules are doing detection but the blocking decision is being done separately by inspecting the overall anomaly score in the modsecurity_crs_49_inbound_blocking.conf file.
Blog post #2 is useful so that you can decide exactly how you want to handle these two rules. If you feel that they are not important to you, then I would suggest that you use the SecRuleRemoveById directive to disable them from your own modsecurity_crs_60_exceptions.conf file. As it stands now, these two alerts only generate an inbound anomaly score of 4, which is below the default threshold of 5 set in the modsecurity_crs_10_config.conf file, so the request is not blocked.
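For example, a minimal sketch using the rule IDs from the audit log above, placed in your modsecurity_crs_60_exceptions.conf (or any file loaded after the CRS rules):
# Disable the "missing Accept header" and "missing User-Agent header" checks
SecRuleRemoveById 960015
SecRuleRemoveById 960009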
Looking at your audit log example: while this request did generate alerts, the transaction was not blocked. If it had been, the message data under Section H would have stated "Access denied...".
As for the purpose of these rules - they are meant to flag requests that are not generated by standard web browsers (IE, Chrome, Firefox, etc.), since all of those browsers send both User-Agent and Accept request headers per the HTTP RFC.
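To see what such a non-browser request looks like, you can strip those headers with curl (a sketch; the URL is a placeholder):
$ curl -s -o /dev/null -H 'User-Agent:' -H 'Accept:' http://www.example.com/
Passing a header name with an empty value tells curl not to send that header at all, so a request like this would trigger both rules.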
One last comment - I would suggest that you use the official OWASP ModSecurity CRS mailing list for these types of questions -
https://lists.owasp.org/mailman/listinfo/owasp-modsecurity-core-rule-set
You can also search its archives for answers.
Cheers,
Ryan Barnett
ModSecurity Project Lead
OWASP ModSecurity CRS Project Lead
These aren't false positives. Your requests lack User-Agent and Accept headers; usually such requests come from scanners or hacking tools.