I signed up for AWS a month ago and everything was working fine, but a week ago I suddenly became unable to sign in. I am getting the following message:
400 Access Restricted.
Your access is being restricted because you appear to be located in a country or region where we do not provide services. We apologize for any inconvenience.
I cannot find anywhere on the AWS site (or via Google) that explains this, and I cannot contact their support because I first have to log in...
I think it started when I cleared all cookies and history in Chrome (although I am not sure it began at that exact moment). But if that were the cause, it should still work from Internet Explorer, unless cookies are shared between browsers?
How could I get access again?
Regards
I'm running into a really strange issue: I just created a site on AWS (S3, CloudFront, Route 53) and now I'm getting a "This site can't be reached" error.
It's strange for the following reasons:
I can access it on my phone
my friends can access it
I could initially access it but now can't
my internet connection is fine (hence posting on here) and I can access other sites
I've cleared my browser cache (but the issue occurs in Firefox too??). I've tried incognito too
I restarted my laptop and it then briefly worked, but after the third refresh it died
I know this is really strange, so I'm not expecting a definitive answer, but I'm wondering whether anyone else has experienced this after launching a site and how I can fix it. I really don't get why it's happening on my computer and no one else's (which makes it hard to debug).
Try invalidating the CloudFront cache; it's possible that a specific cached response is broken.
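If you want to script it, something like this boto3 sketch should do it (the distribution ID is a placeholder; the same operation is available in the CloudFront console under the Invalidations tab):

```python
import time

import boto3

cloudfront = boto3.client("cloudfront")

# "/*" invalidates every cached object; narrow the path list if you only
# suspect specific files.
cloudfront.create_invalidation(
    DistributionId="E1234EXAMPLE",  # hypothetical: your distribution's ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # any string unique per request
    },
)
```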
I have a project where I want to store docs in a Google Cloud Storage bucket. These docs should not be publicly accessible. Right now I am able to place documents in the bucket, but I can't figure out how to retrieve them while keeping the bucket secure. If I open up access to "allUsers", the docs load fine. However, I want these docs to be accessible only if they are using the system.
I'm sure I'm not the first person to want to do this, but I can't seem to come up with an answer on Google.
I have hit dead ends for days now, so please help! To be clear, I do not have any code to show. Thanks
The answer from @DanielOcando is right if "they are using the system" means that they are accessing GCP directly. For this answer, I'm assuming that "your system" is an application you are developing, or something similar.
For this approach, the safest method is to use signed URLs. These will let your users access your documents without needing a Google account, and you can also set an expiration time for the URLs to control how long the user can access the documents.
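For example, with the official Python client the sketch below generates a V4 signed URL (bucket and object names are made up; your server, after authenticating the user in your system, would hand them this URL):

```python
import datetime

from google.cloud import storage

client = storage.Client()  # authenticates with your service account
blob = client.bucket("my-private-docs").blob("reports/q1.pdf")

# The URL stops working after the expiration you set.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
print(url)
```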
Somewhat curious about how to build a website on AWS, yesterday I followed this document:
https://aws.amazon.com/getting-started/projects/host-static-website/?c_1
in order to get started with something simple.
I clicked the Get Started with the Implementation Guide button and found myself here:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
It went pretty well, except that even today, I still can't access the site at the expected URL (http://example.com).
For the sake of simplicity I decided to leave alone http://www.example.com for the time being.
Since Step 2.5: Test Your Endpoint and Redirect could be performed without any problem, I suspect that something went wrong when performing Step 3: Create and Configure Amazon Route 53 Hosted Zone.
I did not find the explanations in the guide very clear, but I did what made sense to me, based on what I could see on the screen and on my previous experience in similar cases with other providers (other than AWS).
Has anyone tried this before and has something to point out?
For reference here is the kind of display I can see on Google Chrome:
This site can’t be reached
example.info’s server IP address could not be found.
Did you mean http://example.com/?
Search Google for example info
ERR_NAME_NOT_RESOLVED
In case something similar happens to someone else, here is what I did.
I finally solved the problem by creating an Alias record, following Step 5: Route DNS Traffic for Your Domain to Your Website Bucket of this document:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.html#getting-started-create-alias
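In case it helps, here is roughly what that step amounts to with boto3 (the hosted zone ID, domain, and region are placeholders; each S3 website endpoint region has its own fixed Route 53 hosted zone ID, and Z3AQBSTGFYJSTF is the one for us-east-1):

```python
import boto3

route53 = boto3.client("route53")

# Hypothetical values: substitute your own hosted zone ID, domain name,
# and your bucket's region (the bucket must be named after the domain).
response = route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Alias example.com to its S3 website endpoint",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "AliasTarget": {
                    # Fixed zone ID for S3 website endpoints in us-east-1.
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
print(response["ChangeInfo"]["Status"])  # "PENDING" until it propagates
```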
We have been working on a gaming website. Recently, while making note of our major traffic sources, I noticed a website that turned out to be a carbon copy of ours. It uses our logo, and everything is the same as ours, but under a different domain name. It can't simply be that their domain name points to ours, because in several places the links look like ccwebsite/our-links. That website even links to some images as ccwebsite/our-images.
What has happened? How could they have done that? What can I do to stop this?
There are a number of things they might have done to copy your site, including but not limited to:
Using a tool to scrape a complete copy of your site and place it on their server
Using their DNS name to point at your site
Manually re-creating your site as their own
Responding to requests to their site by scraping yours in real time and returning that as the response
etc.
What can I do to stop this?
Not a whole lot. You can try to prevent direct linking to your content by requiring Referer headers on requests for your images and other resources, so that requests have to come from pages you serve. But (1) those headers can be faked, and (2) not all browsers send them, so you'd break a small percentage of legitimate users. This also won't stop anybody from copying content, only from "deep linking" to it.
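As a sketch of the Referer idea (assuming a Flask app; the allowed hosts are placeholders):

```python
from urllib.parse import urlparse

from flask import Flask, abort, request, send_from_directory

app = Flask(__name__)
ALLOWED_HOSTS = {"example.com", "www.example.com"}  # hypothetical domains

@app.route("/images/<path:filename>")
def protected_image(filename):
    host = urlparse(request.headers.get("Referer", "")).hostname
    # Reject requests referred from another site. Per the caveats above,
    # requests with no Referer at all are allowed through, since some
    # browsers and privacy tools strip the header.
    if host is not None and host not in ALLOWED_HOSTS:
        abort(403)
    return send_from_directory("static/images", filename)
```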
Ultimately, by having a website you are exposing that information to the internet. On a technical level anybody can get that information. If some information should be private you can secure that information behind a login or other authorization measures. But if the information is publicly available then anybody can copy it.
"Stopping this" is more of a legal/jurisdictional/interpersonal concern than a technical one I'm afraid. And Stack Overflow isn't in a position to offer that sort of advice.
You could run your site with some lightweight authentication: just issue a cookie passively when they pull a page, and require the cookie to get access to resources. If a user visits your site and then the parallel site, they'll still be able to get in; but if a user only knows about the parallel site and has never visited the real site, they will just see a crap ton of broken links and images. This could be enough to discourage your doppelganger from keeping his site up.
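A rough sketch of that passive-cookie gate, again assuming Flask (names are illustrative):

```python
import secrets

from flask import Flask, abort, make_response, request, send_from_directory

app = Flask(__name__)

@app.route("/")
def index():
    # Issue the cookie passively on any page view.
    resp = make_response("<html>...site content...</html>")
    resp.set_cookie("site_token", secrets.token_urlsafe(16), httponly=True)
    return resp

@app.route("/assets/<path:filename>")
def asset(filename):
    # Resources require the cookie, so visitors who have only ever seen
    # the copycat site get 403s: broken images and links everywhere.
    if "site_token" not in request.cookies:
        abort(403)
    return send_from_directory("static", filename)
```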
Another (similar but more complex) option is to implement a CSRF mitigation. Even though this isn't a CSRF situation, the same mitigation will work. Essentially you'd issue a cookie as described above, but in addition insert the cookie value into the URLs for everything and require the two to match. This requires a bit more work (you'll need a filter or module inserted into the pipeline) but will keep out everybody except your own users.
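A sketch of that variant (same assumptions as above): the cookie value is embedded in each resource URL and must match the cookie the browser sends back:

```python
import secrets

from flask import Flask, abort, make_response, request, send_from_directory

app = Flask(__name__)

@app.route("/")
def index():
    token = secrets.token_urlsafe(16)
    # Every resource URL on the page carries the visitor's own token.
    resp = make_response(f'<img src="/assets/{token}/logo.png">')
    resp.set_cookie("site_token", token, httponly=True)
    return resp

@app.route("/assets/<token>/<path:filename>")
def asset(token, filename):
    # A copied page carries someone else's token, which won't match the
    # cookie (if any) of whoever views the copy.
    if token != request.cookies.get("site_token"):
        abort(403)
    return send_from_directory("static", filename)
```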
Before you start yelling at me: I know many users have already asked for something like this, but I read all of them and couldn't find any reply relevant to my specific case. I eventually managed to get something working, but it's not what I think I (and other developers) are looking for. I want to share my experience with all of you, so I'll try to describe my scenario and the steps I followed while looking into this. Please indulge me for this long post: I'm sure it will help developers in the same situation clear their minds too, just as I hope it will give others the right information to help me (and others) with it.
I wrote a native Android application that makes use of the Facebook API. I DO NOT use the Facebook SDK, because I don't want to rely on the official app being installed on the device (as a matter of fact, my app is in part an alternative to that app, so it would be silly to require it to be installed in the first place); rather, I issue Graph API calls directly via HTTP and handle the responses myself. So if that is the answer you're thinking of giving me, please don't, because I won't take that road.
As such, I made use of the Client-side authentication to authorize my app, displaying the URL in a WebView and getting the access_token at the end. I requested offline_access among the other permissions.
Since offline_access is going to be deprecated in May, I started investigating how to get long-lived tokens anyway, and read almost everything I could find on the subject, including of course the official guidelines. Long story short, nothing worked for me, and I'm still stuck with very short-lived access_tokens that I can do nothing about.
This is what I did to begin:
Deprecated offline_access for my app in the settings (well, not THE app, since that one is being used by many users right now, but another one which is basically identical and which I use for testing purposes only, so it amounts to the same thing).
Authorized a user using Client-side authentication: https://www.facebook.com/dialog/oauth?client_id=MY_APP_ID&redirect_uri=http://my.domain.com/yeah.html&scope=publish_stream,read_stream,user_photos,friends_photos,offline_access&response_type=token&display=wap
I got my access_token, but I immediately noticed that it was not long-lived at all, quite the opposite: expires_in was set to something like 6800 seconds (less than two hours). So the first assumption I had made (that access_tokens would be longer-lived by default) was already wrong.
I then looked into how this access_token's lifetime could be extended, and tried almost every alternative out there. Needless to say, every attempt failed. Here is what I tried, to be precise:
First of all, I of course tried the "official" approach, that is, extending the token through the new endpoint. Let me skip for now the rant about how stupid it is to require the client secret for such an operation: as many folks have already pointed out, the secret would need to be embedded in the Android app, which is a security nightmare as far as we developers are concerned, and moving this bit server-side to extend the token's life on behalf of the user is a nightmare for the users instead, since they'd need to trust me not to mess with their access_token. Anyway, I tried issuing a GET request to that address with the correct parameters: https://graph.facebook.com/oauth/access_token?client_id=APP_ID&client_secret=APP_SECRET&grant_type=fb_exchange_token&fb_exchange_token=EXISTING_ACCESS_TOKEN
The request was apparently successful, but it did NOT extend the lifetime of anything. It just returned the same access_token as before, with an expires_in parameter that merely reflected the sands of time flowing away (the same value as before, minus the seconds elapsed since I authorized). Basically, that method only told me how much longer the already available access_token would live, without refreshing or changing anything, so, besides the obvious security concerns it raises, it is pretty useless too.
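For reference, this is the exact call I was making, reproduced with the Python requests library (APP_ID, APP_SECRET, and EXISTING_ACCESS_TOKEN are placeholders):

```python
import requests

resp = requests.get(
    "https://graph.facebook.com/oauth/access_token",
    params={
        "client_id": "APP_ID",
        "client_secret": "APP_SECRET",
        "grant_type": "fb_exchange_token",
        "fb_exchange_token": "EXISTING_ACCESS_TOKEN",
    },
)
# The endpoint answered with a form-encoded body of the shape
# access_token=...&expires=..., echoing back the same token as before.
print(resp.status_code, resp.text)
```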
I then tried what someone else suggested, that is, using the old REST API to do the job, issuing a GET request to the following address: https://api.facebook.com/method/auth.extendSSOAccessToken?access_token=EXISTING_ACCESS_TOKEN, which obviously failed too, with the infamous "The access token was not obtained using single sign-on" error.
After those failed attempts, I started thinking about what might be causing all of them to fail. As I said, my app runs on Android devices but issues HTTP requests to the API directly, which I guess may be the root of the problem.
In the advanced section of my developer app's page, my app was configured as "Web" rather than "Native/Desktop". That said, changing it to "Native/Desktop" did nothing but give me a longer-lived access_token at the first login (about 24 hours rather than 1-2), while the already described attempts at extending its life failed just as before.
The official guideline has an interesting and quite creepy paragraph: "Desktop applications will not be able to extend the life of an existing access_token and the user must login to facebook once the token has expired". While this seems to have been overlooked by many, I started to think it might be the cause of my problems, so I tried an alternative approach: server-side authentication rather than client-side. Again, this requires the client_secret, so it would be a dumb solution for an Android app, but I wanted to try it anyway. So I got the code first, and then the access_token after that (as described in http://developers.facebook.com/docs/authentication/server-side/). This resulted in a much longer-lived access_token (5183882 seconds, that is, about 59 days), but then again, both of the known means of extending it (even if not really needed in this case) gave the same results as before: the former did not refresh anything, and the latter complained that the token was not obtained via SSO.
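Again for reference, the server-side exchange boils down to something like this (requests again; all values are placeholders, and the redirect_uri must match the one used to obtain the code):

```python
import requests

resp = requests.get(
    "https://graph.facebook.com/oauth/access_token",
    params={
        "client_id": "APP_ID",
        "redirect_uri": "http://my.domain.com/yeah.html",
        "client_secret": "APP_SECRET",
        "code": "CODE_FROM_REDIRECT",
    },
)
# Form-encoded response: access_token=...&expires=5183882 in my case.
print(resp.text)
```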
So, very long story short (I know, too late): the deadline for deprecating offline_access is so close you can feel it breathing down your neck, and nothing seems to work. What is your experience with all of this? If you're in the same boat as I am and you managed to get it working, how did you do it?
Thanks for your patience.