How to forward my domain registered with AWS Route53 to Google My Business? - amazon-web-services

My domain: fishercoder.com is registered with AWS Route53.
Now I'd like to configure Google My Business to use this domain.
I searched Google's docs and found that they offer clear instructions for purchasing a new domain through them; for third-party domains they list instructions for GoDaddy, eNom, and Network Solutions, but nothing for AWS Route53.
I thought the process might be similar, so I tried to reproduce those steps in the AWS Route 53 console, but had no luck.
Could anyone share ideas on how to achieve this?
More details:
Right now, when people search for "fisher coder", this page shows up: https://ibb.co/pRWjRc9. If they click Website, it takes them to the default Google My Business website, which is not what I want; I'd like it to point to my own domain: fishercoder.com.
Thanks!

You can do it, but it's not a pretty solution. Not only that, but a Google My Business site (I assume this is what you mean) is so basic that it isn't a good website replacement at all. It's worth setting up because it's free, but beyond that it's meant to keep people in Google, not to help you. Within the business-site settings themselves, you can only map a custom domain that you buy through the option Google gives you there.
Here’s how you do it:
Buy a domain wherever you prefer (I like Namecheap but Google Domains is also a good option).
Forward the domain to the Google Sites URL (many registrars will allow you to do this for free).
That’s it!
It's neither pretty nor ideal, because visitors end up on the Google Site's original URL; they don't stay on your custom domain at all.
So, in simple terms: if someone types in http://customdomain.com they get forwarded to your Google URL and remain on that URL. It essentially just forwards to your Google Site, that's it.
In AWS Route 53 you can set up this kind of domain forwarding (redirection): https://aws.amazon.com/premiumsupport/knowledge-center/redirect-domain-route-53/
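That AWS article does the "forwarding" with an S3 bucket that redirects every request, plus a Route 53 alias record pointing at the bucket. A rough boto3 sketch of those two steps, assuming the domain from the question, a made-up business.site target, and a placeholder hosted zone ID (the S3 website alias values shown are for us-east-1; check the AWS docs for your region):

import boto3

s3 = boto3.client("s3", region_name="us-east-1")
route53 = boto3.client("route53")

DOMAIN = "fishercoder.com"             # the bucket name must match the domain
TARGET = "fishercoder.business.site"   # hypothetical Google My Business URL
HOSTED_ZONE_ID = "ZXXXXXXXXXXXXXX"     # your Route 53 hosted zone for DOMAIN

# 1) An empty S3 bucket whose only job is to redirect every request.
s3.create_bucket(Bucket=DOMAIN)
s3.put_bucket_website(
    Bucket=DOMAIN,
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": TARGET, "Protocol": "https"}
    },
)

# 2) An alias A record so fishercoder.com resolves to the bucket's website endpoint.
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": DOMAIN,
                "Type": "A",
                "AliasTarget": {
                    # Alias zone ID / DNS name for S3 website endpoints in us-east-1;
                    # look up the values for your own region in the AWS docs.
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "DNSName": "s3-website-us-east-1.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)

Keep in mind the S3 website endpoint only serves plain HTTP on your domain, so if you need HTTPS on the custom domain itself you would have to put CloudFront in front of the bucket.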
All of this is based on my own experimenting and on the link below.
Reference: https://www.quora.com/How-can-I-attach-a-custom-domain-to-a-Google-Sites-website

Related

Forcing password on login with IAP and restrict domain

I've set up a Django/python web application running on Google Cloud Platform's Kubernetes Engine pods, and secured by GCP's Identity-Aware Proxy.
It all works great, but there are two things I'm not sure how to accomplish.
1) How can I restrict users to a specific domain, just like the hd=my_domain.com URL parameter does for OAuth2 login? That makes the sign-in page only show emails with that domain in the list to click on.
2) How can I enforce that the user logs in with a password, instead of just simply clicking on the account? This is just like when you go to admin.google.com, or security.google.com and even though you're logged in, it forces a password. I know how to go to /gcp/clear_login_cookie to enforce a new login session when I want to log them out, but not sure how to enforce a password is entered. This I believe is called the "user presence test."
Any help is greatly appreciated; I've pored through the documentation and searched Stack Overflow in various ways to no avail.
Both of these items are on our roadmap, though I can't offer a specific timeline.
I don't see an entry in Issue Tracker for either of these. I'll try to remember to add that next week (at which point I'll add the links here), or you can do it yourself: https://issuetracker.google.com/issues/new?component=190831&template=1162609
Thanks for the suggestion, and sorry I don't have a better answer for you!
--Matthew, Cloud IAP engineering

A static website on AWS not accessible

Somewhat curious about how to build a website on AWS, yesterday I followed this document:
https://aws.amazon.com/getting-started/projects/host-static-website/?c_1
in order to get started with something simple.
I clicked the button Get Started with the Implementation Guide and found myself here:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
It went pretty well, except that even today I still can't access the site at the expected URL (http://example.com).
For the sake of simplicity I decided to leave alone http://www.example.com for the time being.
Since Step 2.5: Test Your Endpoint and Redirect could be performed without any problem,
I suspect that something went wrong when performing Step 3: Create and Configure Amazon Route 53 Hosted Zone.
I did not find the explanations in the guide very clear, but I did what made sense to me, based on what I could see on the screen and on my previous experience in similar cases with other providers (other than AWS).
Has anyone tried this before and has something to point out?
For reference here is the kind of display I can see on Google Chrome:
This site can’t be reached
example.info’s server IP address could not be found.
Did you mean http://example.com/?
Search Google for example info
ERR_NAME_NOT_RESOLVED
In case something similar happens to someone else, here is what I did.
I finally solved the problem by following Step 5: Route DNS Traffic for Your Domain to Your Website Bucket
of this document:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/getting-started.html#getting-started-create-alias
and creating an Alias record.
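If you want to sanity-check what the hosted zone actually contains after that step, a small boto3 sketch like this (the hosted zone ID is a placeholder) lists the record sets so you can confirm the alias A record pointing at the S3 website endpoint is really there:

import boto3

route53 = boto3.client("route53")

# Replace with the ID of your hosted zone for example.com
resp = route53.list_resource_record_sets(HostedZoneId="ZXXXXXXXXXXXXXX")
for record in resp["ResourceRecordSets"]:
    alias = record.get("AliasTarget", {}).get("DNSName", "")
    print(record["Name"], record["Type"], alias or record.get("ResourceRecords", []))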

How to make Google Analytics track only the domain without subdomains?

I have two websites:
site.com
sub.site.com
The first one has Google Analytics, and its cookies are set with the domain ".site.com". What happens is that my second site, which is not connected to it at all, gets these cookies sent to it, and I don't want that. Is there a way to avoid this? I think that changing the cookie domain from ".site.com" to "site.com" would work, but I am not 100% sure, nor do I know how to do this with Google Analytics.
There is actually an answer in another question, though it is not the accepted one.
We need to add this to the GA code:
pageTracker._setDomainName("none")
This will set the cookie domain to "site.com" instead of ".site.com", so the cookie will not be sent to subdomains.

Parallel website running alongside my original website

We have been working on a gaming website. Recently, while reviewing our major traffic sources, I noticed a website that turned out to be a carbon copy of ours. It uses our logo and everything is the same as on our site, just under a different domain name. It can only be that their domain name is pointing at our site, because in several places the links look like ccwebsite/our-links. That website even has links to some images as ccwebsite/our-images.
What has happened? How could they have done that? What can I do to stop this?
There are a number of things they might have done to copy your site, including but not limited to:
Using a tool to scrape a complete copy of your site and place it on their server
Using their DNS name to point to your site
Manually re-creating your site as their own
Responding to requests to their site by scraping yours in real time and returning that as the response
etc.
What can I do to stop this?
Not a whole lot. You can try to prevent direct linking to your content by requiring referrer headers for your images and other resources so that requests need to come from pages you serve, but 1) those can be faked and 2) not all browsers will send those so you'd break a small percentage of legitimate users. This also won't stop anybody from copying content, just from "deep linking" to it.
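As an illustration of that referrer check, here is a minimal sketch as a hypothetical Flask route serving images; the allowed prefix and paths are made up, and as noted above the Referer header can be missing or faked:

from flask import Flask, abort, request, send_from_directory

app = Flask(__name__)
ALLOWED_PREFIX = "https://www.example-gaming-site.com/"  # your real site (placeholder)

@app.route("/images/<path:filename>")
def image(filename):
    # Only serve images when the request claims to come from one of our own pages.
    # This is a speed bump against deep linking, not real security.
    referrer = request.headers.get("Referer", "")
    if not referrer.startswith(ALLOWED_PREFIX):
        abort(403)
    return send_from_directory("static/images", filename)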
Ultimately, by having a website you are exposing that information to the internet. On a technical level anybody can get that information. If some information should be private you can secure that information behind a login or other authorization measures. But if the information is publicly available then anybody can copy it.
"Stopping this" is more of a legal/jurisdictional/interpersonal concern than a technical one I'm afraid. And Stack Overflow isn't in a position to offer that sort of advice.
You could run your site with some lightweight authentication. Just issue a cookie passively when they pull a page, and require the cookie to get access to resources. If a user visits your site and then the parallel site, they'll still be able to get in, but if a user only knows about the parallel site and has never visited the real site, they will just see a crap ton of broken links and images. This could be enough to discourage your doppelganger from keeping his site up.
Another (similar but more complex) option is to implement a CSRF mitigation. Even though this isn't a CSRF situation, the same mitigation will work. Essentially you'd issue a cookie as described above, but in addition insert the cookie value in the URLs for everything and require them to match. This requires a bit more work (you'll need a filter or module inserted into the pipeline) but will keep out everybody except your own users.
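A minimal sketch of the passive-cookie idea from these two answers, again as a hypothetical Flask app with made-up routes and cookie name: pages set the cookie, and resource routes refuse requests without it. Embedding the same token in your URLs and comparing it to the cookie, as the CSRF-style variant describes, is a small extension of this.

import secrets
from flask import Flask, abort, make_response, request, send_from_directory

app = Flask(__name__)
COOKIE = "site_token"  # hypothetical cookie name

@app.route("/")
def home():
    # Passively issue the cookie whenever someone loads a real page from our site.
    resp = make_response("<html>...your real homepage...</html>")
    if not request.cookies.get(COOKIE):
        resp.set_cookie(COOKIE, secrets.token_urlsafe(16), httponly=True)
    return resp

@app.route("/assets/<path:filename>")
def assets(filename):
    # Visitors who only ever saw the copycat site never received the cookie,
    # so for them every image/script/stylesheet request comes back as 403.
    if not request.cookies.get(COOKIE):
        abort(403)
    return send_from_directory("static", filename)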

Akamai Edgescape?

I was wondering if anyone has any links on how to implement Akamai's Edgescape solution to get the zip code? I tried scouring the web for some sort of documentation from Akamai but couldn't find any docs online, so I thought I would ask here first before contacting them.
If you have an Akamai account and have access to the control panel (https://control.akamai.com/), here is a document where you will find the information you need : https://control.akamai.com/dl/customers/ESCAPE/EdgeScape_users_guide.pdf
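For what it's worth, once Edgescape is enabled for your property it is typically configured to pass the geo data to your origin in a request header (often named something like X-Akamai-Edgescape) containing comma-separated key=value pairs. Assuming that kind of setup, which you should confirm against the guide above, extracting the zip code at the origin is just string parsing, e.g. in Python:

def edgescape_zip(header_value):
    # Parse a value shaped like 'country_code=US,region_code=CA,city=SANJOSE,zip=95134'
    # (the exact fields depend on your Edgescape configuration).
    fields = dict(pair.split("=", 1) for pair in header_value.split(",") if "=" in pair)
    return fields.get("zip")  # None if the field is absent

# e.g. in a Django/Flask view:
# zip_code = edgescape_zip(request.headers.get("X-Akamai-Edgescape", ""))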
This sounds like an apples and oranges question. If you're using a CDN, by design, a percentage of requests that would normally be directed at your web server will be offloaded by the CDN. Of the total number of requests, those that make it through can be configured to provide the "True IP" of the client if you prefer.
As of 04/12 this is configured by adding the optional "Edge Services General" feature to your config, then enabling the "True Client IP Header".
As a bonus, if you're a Rails shop, I'd suggest changing the name of the header to "Client-IP". If you do so, Rails will automatically use this header to determine the real IP of the user. This works as of 3.2.x, as documented in ActionDispatch::RemoteIp.
Note: Rails prefixes the header with HTTP_ :)