I uploaded a website into Google Cloud Platform > Storage and set up the DNS, and it goes to Google but then shows an error message about the bucket (I can't reproduce the error message, but it doesn't go to the right location). Google gives me a link to the website and I can get to it from: https://storage.googleapis.com/pampierce.me/index.html
but https://pampierce.me/index.html doesn't work.
Currently, the DNS CNAME is set to: c.storage.googleapis.com. What should it be set to?
Or is the problem that I shouldn't put an HTML / CSS / JS only website in Storage? If so, then where / how?
Thanks.
The issue is with the name of the bucket; this is why it is not working.
I checked the CNAME record for www.pampierce.me and it points to c.storage.googleapis.com., but pampierce.me itself points to 91.195.240.103. Note that www.pampierce.me is not the same as pampierce.me. This is a DNS detail, but in general this configuration is okay.
The real issue is with your bucket. While you can create a bucket named pampierce.me, this does not work when using Cloud Storage to host a site; for this reason the bucket should be named www.pampierce.me. This is mentioned here.
Once you have created the bucket www.pampierce.me and repeated the file uploads and steps you have already done, everything should work fine. Also, the way to access the site is http://www.pampierce.me/index.html (note that, as before, this is not the same as http://pampierce.me/index.html).
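If you prefer to script these steps, here is a minimal sketch using the Node.js client library; the console or gsutil work just as well, and the file name is an assumption based on your description:
// Sketch: create the www.pampierce.me bucket and publish the site files.
// Note: domain-named buckets require that you have verified ownership of
// pampierce.me in Google Search Console.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
async function setUpSiteBucket() {
  // The bucket name must match the host name the CNAME points at.
  await storage.createBucket('www.pampierce.me');
  const bucket = storage.bucket('www.pampierce.me');
  // Upload the site files and make them publicly readable.
  await bucket.upload('index.html');
  await bucket.makePublic({ includeFiles: true });
}
setUpSiteBucket().catch(console.error);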
Finally, you will notice that I say http and not https; the reason is that Cloud Storage does not support SSL when serving a website this way.
In case you want to access the site at https://pampierce.me (naked domain and HTTPS), I suggest following this tutorial, but note that it also implies using a Load Balancer, which means extra cost. Also, this issue is specific to Cloud Storage; App Engine is a different product.
I am new to working with AWS and route 53 so any help is appreciated.
I have created an organization on GitHub, and then created a simple repository for a static site to display with GitHub Pages. This is working as expected and I can see the static site at the URL generated by GitHub (something like: https://<githubOrgName>.github.io/<repoName>/)
I got a domain from AWS and now I'm trying to set it up so the apex domain (e.g. "my-domain.com") points to the Github pages site.
I followed the instructions found at: https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages ... but it doesn't seem to be working.
I am trying to make it so that the apex domain points to the repository's GitHub Pages site, something like:
https://my-domain.com -> https://<githubOrgName>.github.io/<repoName>/
... but this only shows a blank screen when I go to the root domain ("my-domain.com"). I have also tried to go to https://my-domain.com/<repoName>/... but this shows me a GitHub 404 page (so it seems to be correctly forwarding something to GitHub):
My AWS Route 53 configuration is similar to the following (I have tried to remove sensitive details):
Can anyone explain to me what I am doing wrong? I am new to working with domains, so any help is appreciated.
Using Route 53 alone won't help you there, because your target URL contains a URL path, i.e. /<repoName>/.
DNS is a name resolution system and knows nothing about HTTP.
Furthermore, the origin server (github.io) might be running a reverse proxy which parses the request headers, among which is the Host header. Your browser automatically sets this header to the URL you feed it. As a result, you send it the wrong header (i.e. https://my-domain.com/), which GitHub cannot process. You could explicitly set this header (e.g. via curl) to what GitHub is expecting, but I believe that's not what you and your users would like.
Instead, you could try using layer 7 redirects (301/302) with the help of Lambda@Edge (provided by AWS CloudFront). I have created a simple solution using the Serverless framework, which does the following redirects:
https://maslick.tech -> https://github.com/maslick
https://maslick.tech/cv -> https://www.linkedin.com/in/maslick/
https://maslick.tech/qa -> https://stackoverflow.com/users/2996867/maslick
https://maslick.tech/ig -> https://www.instagram.com/maslick/
You can customize it by adjusting handler.js according to your needs. You might also need to create a free TLS certificate using AWS Certificate Manager in the us-east-1 region and attach it to your CloudFront distribution, but this is optional.
Lambda@Edge will give you low latencies, since your redirects will be served from CloudFront's edge locations across the globe.
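To give an idea of the shape of such a handler, here is a minimal sketch of a Lambda@Edge viewer-request function doing path-based 301 redirects; the redirect map is modeled on the list above, and the actual handler.js in the linked project may differ:
'use strict';
// Minimal sketch: issue 301 redirects straight from CloudFront's edge,
// based on the request path. Adjust the map to your own destinations.
const redirects = {
  '/': 'https://github.com/maslick',
  '/cv': 'https://www.linkedin.com/in/maslick/',
  '/qa': 'https://stackoverflow.com/users/2996867/maslick',
  '/ig': 'https://www.instagram.com/maslick/',
};
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const target = redirects[request.uri] || redirects['/'];
  // Returning a response from a viewer-request handler makes CloudFront
  // reply immediately, without contacting the origin.
  return {
    status: '301',
    statusDescription: 'Moved Permanently',
    headers: {
      location: [{ key: 'Location', value: target }],
    },
  };
};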
How I got it to work was:
Set a CNAME record from example.org to <USERNAME>.github.io. in the Route 53 console
Set Custom domain to example.org in the GitHub Pages settings for github.com/<USERNAME>/<REPO>
Note: You shouldn't be setting the CNAME record to <USERNAME>.github.io/<REPO>
Source: https://deanattali.com/blog/multiple-github-pages-domains/
I'm trying to configure the Google Cloud load balancer to do the following:
I have a website running on a WordPress machine in a VM instance, which I want users to access when they enter outairnet.com.
And I have a separate html file that I want users to access when they access outairnet.com/map.
WP is running on a Compute Engine VM, connected to an instance group and to a backend service. The separate HTML file is in a storage bucket, connected to a backend bucket.
I've tried to configure a very simple path forwarding rule, which made sense to me, but it just ends up with anyone trying to access outairnet.com/* getting to the WP (which is fine),
while accessing outairnet.com/map doesn't point to the storage bucket with the HTML file; however, accessing outairnet.com/index.html does point to the separate HTML file.
My LB config looks like this.
I think I'm on to the problem but still can't solve it.
It looks like the Google console adds a /* rule even when I try to delete it,
so it's a /* path rule that catches everything, despite there being a more specific rule like /mypath/* in addition.
But after removing it, it just gets re-added automatically for some reason. Why?
It's possible; there are a few steps involved, such as creating a bucket with your static page, adding it as a backend bucket in your load balancer, and creating a new path rule in it to route the requests.
And now the details:
Create a new bucket; pick any name you like (outairnet-static, or something that will be meaningful to you so you don't delete it by accident). You can ignore all the tutorials saying that it has to have the exact name of your domain; since it will only be hosting a file accessible under outairnet.com/mylink/, it will work regardless of the name used. I tested it.
Create a directory in your bucket named exactly as the path under which you want it to be. If you want outairnet.com/mylink/, then the directory's name has to be mylink. Upload your files into that directory. Name your main index file index.html unless you want to provide the full file path.
Make the bucket available to everyone (see the code sketch after these steps).
Go to your LB configuration and edit backend services; add a new backend bucket.
Go to your Host and Path Rules and configure a new path; enter the name of your site and the path (remember that the path value must be /mylink/*) and select the bucket you've just created from the drop-down list.
No changes necessary for the frontend. Save the changes and in a few moments it should be working.
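For reference, here is a rough sketch of the bucket-side steps (create, upload under the path, make public) using the Node.js client; the bucket name and paths are just examples, and you can do the same from the console or gsutil:
// Sketch of steps 1-3: create a bucket with an arbitrary name, upload the
// page under the "mylink/" prefix, and make the objects publicly readable.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
async function prepareStaticBucket() {
  await storage.createBucket('outairnet-static');
  const bucket = storage.bucket('outairnet-static');
  // The object prefix must match the path rule, e.g. /mylink/* -> mylink/
  await bucket.upload('index.html', { destination: 'mylink/index.html' });
  // Make the bucket and the files it contains publicly readable.
  await bucket.makePublic({ includeFiles: true });
}
prepareStaticBucket().catch(console.error);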
I just added another path rule with just "/" directing to the VM and it seemed to do it, but now the only glitch is that www.outairnet.com/map is fine while outairnet.com/map without www directs to the VM and not the bucket.
Let's assume one has an Amazon S3 bucket example.com configured for static hosting. In the configuration, the console allows setting an index file and an optional error file. But I'm struggling to figure out how to add another page to the site. I thought this would be straightforward, but I cannot find the answer in the official documentation or on the internet.
If I want to add one more page to the static site (e.g. example.com/page2) and there is a page2.html file already in the S3 bucket at the root, where is the correct place to make this routing configuration? Can it be done through the S3 console? Or does it need to be configured through some kind of a DNS record? As a further complication, this needs to also work with and without the www in the URL.
On the DNS side I currently have the following configuration:
CNAME | WWW | www.example.com.s3-website-us-east-1.amazonaws.com | TTL 30 min
URL Redirect Record | # | http://www.example.com unmasked
Are you trying to access the page at example.com/page2.html or example.com/page2?
If you want to access the page at example.com/page2 then create a 'folder' called page2 off the root and in that folder put a file called index.html
If you want to use example.com/page2.html, then create a file called page2.html and put it in the 'root' of the bucket.
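To sketch both options with the AWS SDK for JavaScript (v3), under the assumption that the file lives locally as page2.html; setting ContentType matters so the browser renders the page instead of downloading it:
// Sketch: upload page2 in the two layouts described above.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const { readFileSync } = require('fs');
const s3 = new S3Client({ region: 'us-east-1' });
async function uploadPage2() {
  // Option A: example.com/page2 -> served from page2/index.html
  await s3.send(new PutObjectCommand({
    Bucket: 'example.com',
    Key: 'page2/index.html',
    Body: readFileSync('page2.html'),
    ContentType: 'text/html', // serve as a page, not a download
  }));
  // Option B: example.com/page2.html -> page2.html at the bucket root
  await s3.send(new PutObjectCommand({
    Bucket: 'example.com',
    Key: 'page2.html',
    Body: readFileSync('page2.html'),
    ContentType: 'text/html',
  }));
}
uploadPage2().catch(console.error);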
Simply create a file called page2.html. It will be accessible via example.com/page2.html.
No routing configuration is required.
The index file alias is only used if no page is specified (e.g. they go to example.com/).
As for mapping www.example.com to example.com, you would create another bucket with the name www.example.com and use "Redirect requests" to point back to example.com. (If using a CNAME works for you, that's probably easier, but test it first to see if it functions as expected. See: Mapping naked domain (www.domain.com) to static website which is saved in S3)
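If you do go the second-bucket route, here is a rough sketch of that redirect configuration with the AWS SDK for JavaScript (v3); the bucket names are placeholders for your own domain, and the console's "Redirect requests" option does the same thing:
// Sketch: make the www.example.com bucket redirect every request to
// example.com (static website hosting "redirect all requests" mode).
const { S3Client, PutBucketWebsiteCommand } = require('@aws-sdk/client-s3');
const s3 = new S3Client({ region: 'us-east-1' });
async function redirectWwwToApex() {
  await s3.send(new PutBucketWebsiteCommand({
    Bucket: 'www.example.com',
    WebsiteConfiguration: {
      RedirectAllRequestsTo: { HostName: 'example.com', Protocol: 'http' },
    },
  }));
}
redirectWwwToApex().catch(console.error);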
See: Configuring a static website using a custom domain registered with Route 53 (Follow the manual steps rather than automating via CloudFormation, so you can better understand what has been configured)
I am using Google Cloud Storage as a CDN to store files for our website, which is hosted on Fastly.
In the case of PDF files, we redirect to the URL of the PDF file in Google Cloud Storage.
Everything works fine except when the user manipulates the file location in the URL (which is used to build the Google Storage object URL). In that case Google Storage displays an error message in XML format as follows:
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
</Error>
Such a message is fine for dev environments; however, in production this is not something we can show to the user in a browser.
So I want to understand whether there is any provision in Google Cloud Storage to customize these messages and pages.
Thanks in advance,
Yogesh
The best way I know of to avoid this error would be to use GCS's static website hosting feature. To do so, you'd purchase some domain name, create a GCS bucket that matches that domain name, then set the "NotFoundPage" property of the website configuration to an object containing whatever you'd like the error page to be. The main downside here is that this would only work over HTTP, not HTTPS.
For more on how to set up static website config, see https://cloud.google.com/storage/docs/hosting-static-website
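As a rough sketch of that website configuration with the Node.js client (the bucket and file names are assumptions for illustration):
// Sketch: serve a custom 404 page instead of the XML NoSuchKey error.
// Assumes a domain-named bucket and that 404.html has been uploaded to it.
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();
async function setCustomErrorPage() {
  await storage.bucket('www.example.com').setMetadata({
    website: {
      mainPageSuffix: 'index.html', // served for requests to "/"
      notFoundPage: '404.html',     // served when the object does not exist
    },
  });
}
setCustomErrorPage().catch(console.error);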
So we are using the Meteor browser-policy package, and using Amazon S3 to store content.
On the server we have set up the browser policy as follows:
BrowserPolicy.content.allowOriginForAll('*.amazonaws.com');
BrowserPolicy.content.allowOriginForAll('*.s3.amazonaws.com');
This works fine in local dev and in production when visiting our http:// site. However, when using the https:// address of our site, the AWS content no longer passes this policy.
The following error appears in the console:
Refused to load the image 'http://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png' because it violates the following Content Security Policy directive: "img-src data: 'self' *.google-analytics.com *.zencdn.net *.filepicker.io *.uservoice.com *.amazonaws.com *.s3.amazonaws.com".
As you can see, we have some other origins allowed in the browser policy; these all seem to work fine over both http and https. AWS S3 is the only one that is failing.
I've tried Chrome, Firefox, and Safari and they all have the same issue.
What's going on?
I may not have the exact answer to this question but I have some information which the community may find helpful.
First, you should avoid serving mixed content. I'm unclear whether that would set off the browser-policy alerts, but you just shouldn't do it anyway. The easiest solution is to use a protocol-relative URL or just explicitly specify https in your URL.
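For illustration (the asset name is just taken from your error message), the same S3 object referenced three ways:
// Hard-coded http:// triggers mixed content when the page is served over https:
const httpUrl = 'http://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png';
// Explicit https works on both the http and https versions of the site:
const httpsUrl = 'https://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png';
// Protocol-relative inherits the scheme of the current page:
const protocolRelativeUrl = '//our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png';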
Second, I too assumed that the wildcard worked like a glob. However, I've been told that it works the same way as an SSL certificate rule, i.e. for all subdomains or for a specific subdomain. In other words, *.example.com and www.example.com are valid, but *.foo.example.com isn't meaningful. I think you want to explicitly add your bucket like so:
BrowserPolicy.content.allowOriginForAll('our-bucket-name.s3.amazonaws.com')
unless you literally want to trust all of amazonaws.com.