I can't access a site that I just created on AWS? - amazon-web-services

I'm running into a really strange issue: I just created a site on AWS (S3, CloudFront, Route 53) and now I'm getting a "This site can't be reached" error.
It's weird for the following reasons:
I can access it on my phone
my friends can access it
I could initially access it but now can't
my internet connection is fine (hence posting here) and other sites load normally
I've cleared my browser cache (but the issue occurs in Firefox too??). I've tried incognito too
I restarted my laptop and it briefly worked, but after the third refresh it died again
I know this is super weird, so I'm not really expecting it to be solved, but I'm wondering if anyone else has experienced this after launching a site, and how I can fix it. I really don't get why it's happening on my computer and no one else's (which makes it hard to debug).

Try invalidating the CloudFront cache; it's possible a specific cached copy is broken.
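If it helps, here's a minimal sketch of triggering that invalidation with boto3; the distribution ID is a placeholder, and the same thing can be done from the CloudFront console.

```python
# Minimal sketch: invalidate every cached path on a CloudFront
# distribution. The distribution ID below is hypothetical.
import time
import boto3

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # replace with your distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique for each invalidation request.
        "CallerReference": str(time.time()),
    },
)
```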

Related

Remove 400 error (cookies) caused by an update of the Consent Management Platform

The site of the company I work at uses a Consent Management Platform which had been functioning fine. Recently we had to make some modifications to it and reimplement it. The implementation went fine; even the engineers who provide support for the CMP we're using confirmed that everything I did was correct.
And now the problem: some users still have the old cookie on their devices. When they visit the site they now receive a 400 error and can no longer access it. The fix would be for every user to manually delete the cookie on their device, but that's impossible, as our visitors are not very technical and we can't reach all of them.
So, is there any way to make some kind of change or implementation on our side, server-side, to refresh the users' sessions and make the 400 error disappear without them having to do it manually?
I'm really in a pinch right now and am in need of real advice.
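If the 400 is produced by your own application (rather than by a front-end proxy rejecting the request before it reaches you), one common server-side option is to detect the stale cookie and re-set it with an expiry date in the past, which makes the browser delete it. A minimal sketch, assuming a Django middleware; the cookie name "euconsent-old" here is hypothetical:

```python
# Hypothetical sketch: expire a known stale cookie server-side so the
# browser discards it. The cookie name below is a placeholder.
STALE_COOKIE = "euconsent-old"

class ClearStaleCmpCookieMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        if STALE_COOKIE in request.COOKIES:
            # delete_cookie sends Set-Cookie with an expiry in the past;
            # path/domain must match how the CMP originally set the cookie.
            response.delete_cookie(STALE_COOKIE, path="/")
        return response
```

The catch is that if the 400 comes from the web server or load balancer itself (for example, because of oversized request headers), application code never runs, and the fix has to happen at that layer instead.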

Selenium not working properly on remote server (AWS EC2)

I'm using Selenium with ChromeDriver to scrape Facebook, Instagram, and Reddit. Everything works perfectly on my local machine. However, when I deploy to AWS EC2 (using Elastic Beanstalk) it does not work properly. Reddit is scraped without any problem, but for Facebook and Instagram it throws: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element.
Searching for a solution, I learned that this error occurs when we try to look for elements before the page has finished loading. (Note: this issue did not occur on my local machine.)
To resolve this I added a waiting delay so the page can load completely. I tried delays of up to 60 seconds, but the page still fails to load. Note: the issue does not occur on the first page. On Facebook, the login succeeds, the search query succeeds and results are shown, but the error occurs (the page fails to load) when trying to open a specific post. Similarly, on Instagram the login succeeds, but the error occurs when trying to access the search box element.
I have no idea why the pages are not loading when accessed through the AWS server but load fine on my local machine. The fact that Reddit is scraped successfully suggests the issue is not with Selenium or other environment dependencies...
If someone has any idea about this problem, please share. Thanks.
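Not a root-cause answer, but a sketch of the usual mitigations: replace the fixed delay with an explicit wait for a specific element, and set the headless flags that commonly matter on EC2 (window size and user agent, since some sites serve different or stripped-down markup to headless Chrome's defaults). The URL and selector below are placeholders.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("--headless=new")
options.add_argument("--no-sandbox")             # often needed on EC2
options.add_argument("--disable-dev-shm-usage")  # /dev/shm is small on EC2
options.add_argument("--window-size=1920,1080")  # headless default is tiny
# Some sites serve different markup (or a block page) to the headless UA.
options.add_argument(
    "--user-agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)

driver = webdriver.Chrome(options=options)
driver.get("https://www.example.com/")  # placeholder URL

# Wait for the element itself instead of sleeping for a fixed time.
# The CSS selector is a placeholder, not the site's real markup.
element = WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "input[type='search']"))
)
```

If the element still never appears, dump driver.page_source on failure: from a server IP these sites often return a login, captcha, or checkpoint page instead of the expected content, which would also explain the missing elements.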

Amazon Web Services "400 Access Restricted" - Cannot log in

I signed up for AWS a month ago and everything was working fine, but a week ago I suddenly became unable to sign in. I am getting the following message:
400 Access Restricted.
Your access is being restricted because you appear to be located in a country or region where we do not provide services. We apologize for any inconvenience.
I can't find anywhere on the AWS site (or Google) explaining this, and I can't get support from them because first I have to log in...
I think it happened because I cleared all cookies and history in Chrome (though I'm not sure it started at that moment). But if that were the cause, it should still work from Internet Explorer, or are cookies shared between browsers?
How can I get access again?
Regards

Functioning domain even when the server is off

Something might be wrong with my domain name. When I visit the URL it still works and shows the default webpage, even though the server has been off for weeks now.
You may have already tried this, but the two things I would check are:
You could be viewing a cached version of the page. This is the most obvious explanation, but I'm guessing you've already cleared your browser cache.
You might be viewing a copy on a backup server. That depends on whether your NS records specify an alternate server that isn't off.
Are you sure your server is off? What response do you get when you ping it via the domain name (a quick way to check what the name resolves to is sketched below)? As this is basic troubleshooting you may have already tried it; if so, what you found could help diagnose the cause.
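A minimal sketch of that check using only Python's standard library; "example.com" is a placeholder for your domain.

```python
# Resolve the domain and probe port 80, to see whether *anything*
# is still serving the site. "example.com" is a placeholder.
import socket

name, aliases, addresses = socket.gethostbyname_ex("example.com")
print("Canonical name:", name)
print("Aliases:", aliases)
print("A records:", addresses)

for addr in addresses:
    try:
        with socket.create_connection((addr, 80), timeout=3):
            print(addr, "accepts connections on port 80")
    except OSError:
        print(addr, "is not reachable on port 80")
```

If an unexpected address answers, the name is pointing at a server (or CDN edge) other than the one you switched off.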

Django: 'CSRF verification failed' only happens on one computer!

I have a strange issue with my Django app. I implemented user auth/profiles, and I can log in successfully from various computers and from three different internet locations. It all works, except on this one computer.
This one computer gets the following error when logging in:
CSRF verification failed. Request aborted.
No CSRF or session cookie.
I tried various browsers on this computer and all get the same error. I even tested logging in from another computer on the same connection, and it works fine. I believe this shows it's not a network problem but a general setting on that computer (not a specific browser).
I'm afraid that if this error happens on this one computer, there may be other computers out there with the same issue once I go live. Is there anything I can do to check why it happens only on this machine, and more importantly, how would I fix it?
I'm hosting the app on some computer using the Django dev server.
Thanks a lot.
It could be that the error is misleading. When I've seen login problems that affect only one computer but multiple browsers, it has usually been the computer's date setting interacting with cookie expiry.
For example, that one computer may be set to a date one month in the future, so the cookie being sent expires instantly because it is only a 90-minute session cookie.
So while it's not really a Django-specific answer, check the clock on that computer. :-)
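A tiny worked example of why the skewed clock kills the cookie (the dates are made up):

```python
# The server stamps the session cookie with an expiry 90 minutes from
# *its* clock; the affected machine's clock is a month ahead, so from
# that machine's point of view the cookie is already long dead.
from datetime import datetime, timedelta

server_now = datetime(2024, 1, 1, 12, 0)
expires = server_now + timedelta(minutes=90)

client_now = datetime(2024, 2, 1, 12, 0)  # clock set a month ahead
print(expires < client_now)  # True -> browser discards the cookie
```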