Create Amazon CloudFront distribution in a script (Windows)

Is there a decent, full example anywhere of creating a distribution (ideally with more than one origin - S3 and an AMS) from the command line? I was a bit dismayed to find that it isn't a case of "aws cloudfront blah blah..."
In Windows, assuming no special tools - though any solution that needs a standalone exe is fine. I have been pointed at cURL... but I can't figure out all the stuff I need to pass in, or indeed how to use cURL to do so - I have found it has a -H param for headers... but I've never used cURL, so I'm a bit lost.
I looked at http://docs.aws.amazon.com/AmazonCloudFront/latest/APIReference/CreateDistribution.html but am bemused by the sketchiness of the 'example', e.g.
POST /2013-09-27/distribution HTTP/1.1
Host: cloudfront.amazonaws.com
Authorization: AWS authentication string
Date: Thu, 17 May 2012 19:37:58 GMT
Other required headers
...
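From what I can piece together, the cURL shape of that request would be something like the following (using Windows ^ line continuations) - the Authorization value is a signature computed over the Date header in the legacy "AWS AccessKeyId:Signature" format, and computing it is exactly the part the SDK/CLI normally does for you:
curl -X POST "https://cloudfront.amazonaws.com/2013-09-27/distribution" ^
  -H "Host: cloudfront.amazonaws.com" ^
  -H "Date: Thu, 17 May 2012 19:37:58 GMT" ^
  -H "Authorization: AWS <access-key-id>:<computed-signature>" ^
  --data "@distribution-config.xml"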
Where do I find my AWS authentication string?
What are the "Other required headers"?
The distribution ID I can find on the CloudFront admin page on the web.
I am totally lost - I need the real beginner's guide here, step by step, ideally cross-referenced to the CloudFront admin page on the web. I'm a C#/SQL desktop apps dev normally, so this is way out of my comfort zone.

I have ended up using the AWS SDK for .NET.

Actually, it probably is a matter of aws cloudfront blah blah
When typing aws cloudfront in a recent version of the AWS CLI one gets:
This service is only available as a preview service.
However, if you'd like to use a basic set of cloudfront commands with the
AWS CLI, you can enable this service by adding the following to your CLI
config file:
[preview]
cloudfront=true
Following this advice, you get the command aws cloudfront create-distribution.
I am personally still struggling to find the correct input for required params like --distribution-config
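For anyone else stuck here, a minimal sketch of a distribution config for a single S3 origin - the bucket name, caller reference, and comment are placeholders, and the exact set of required fields varies by CLI/API version, so check aws cloudfront create-distribution help against your installed version:
aws cloudfront create-distribution --distribution-config file://dist-config.json
where dist-config.json contains something like:
{
    "CallerReference": "my-distribution-2016-01-01",
    "Comment": "created from the CLI",
    "Enabled": true,
    "Origins": {
        "Quantity": 1,
        "Items": [
            {
                "Id": "my-s3-origin",
                "DomainName": "my-bucket.s3.amazonaws.com",
                "S3OriginConfig": { "OriginAccessIdentity": "" }
            }
        ]
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "my-s3-origin",
        "ViewerProtocolPolicy": "allow-all",
        "MinTTL": 0,
        "ForwardedValues": {
            "QueryString": false,
            "Cookies": { "Forward": "none" }
        },
        "TrustedSigners": { "Enabled": false, "Quantity": 0 }
    }
}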

Running aws configure set preview.cloudfront true did not work for me. In my case (Linux), the following worked:
Edit the file ~/.aws/config and append the following content to the end:
[preview]
cloudfront=true
This topic helped, but didn't mention the brackets around [preview].

Related

AWS Exception "The If-Match version is missing or not valid for the resource" when updating using web interface

When updating my AWS Cloudfront distribution using the AWS web interface, I get the following error: "The If-Match version is missing or not valid for the resource.".
As far as I can tell, I cannot disable, delete, update, or perform any other action on the distribution.
There are similar questions on Stack Overflow, but they deal with the AWS console.
Have I done something wrong, or has AWS misused its own API?
It was caused by the ClearURLs plugin I had installed in Chrome.
https://repost.aws/questions/QUdbuTO7zDRkWBHhQ6K7tDqA/cant-edit-the-if-match-version-is-missing-or-not-valid-for-the-resource
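For context, the If-Match header carries the distribution's current ETag: you receive it when you read the config and must echo it back when you write. From the CLI the same handshake looks roughly like this (the distribution ID and ETag are placeholders):
aws cloudfront get-distribution-config --id E1ABCDEFGHIJKL
# the response includes an "ETag" value, e.g. ETAGEXAMPLE123
aws cloudfront update-distribution --id E1ABCDEFGHIJKL --if-match ETAGEXAMPLE123 --distribution-config file://updated-config.json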

AWS Route 53 Domain Point To GitHub Pages

I am new to working with AWS and Route 53, so any help is appreciated.
I have created an organization on GitHub, and then created a simple repository for a static site to display with GitHub Pages. This is working as expected and I can see the static site at the URL generated by GitHub (something like: https://<githubOrgName>.github.io/<repoName>/)
I got a domain from AWS and now I'm trying to set it up so that the apex domain (e.g. "my-domain.com") points to the GitHub Pages site.
I followed the instructions found at: https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages ... but it doesn't seem to be working.
I am trying to make it so that the apex domain points to the repository's GitHub Pages site, something like:
https://my-domain.com -> https://<githubOrgName>.github.io/<repoName>/
... but this only shows a blank screen when I go to the root domain ("my-domain.com"). I have also tried going to https://my-domain.com/<repoName>/... but this shows me a GitHub 404 page (so something does seem to be correctly forwarded to GitHub):
My AWS Route 53 configuration is similar to the following (I have tried to remove sensitive details):
Can anyone explain to me what I am doing wrong? I am new to working with domains, so any help is appreciated.
Using Route 53 alone won't help you there, because your target URL contains a URL path, i.e. /<repoName>/.
DNS is a name resolution system and knows nothing about HTTP.
Furthermore, the origin server (github.io) might be running a reverse proxy which might be parsing the request headers, among which is the Host header. Your browser automatically sets this header from the URL you feed it, so you end up sending a header (my-domain.com) that GitHub cannot map to your Pages site. You can explicitly set this header to what GitHub is expecting (e.g. via curl, as sketched below), but I believe that's not what you and your users would like.
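For example, a sketch of setting the header explicitly with curl (placeholders as in the question; plain http is used to sidestep TLS/SNI complications):
curl -H "Host: <githubOrgName>.github.io" http://my-domain.com/<repoName>/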
Instead, you could try using layer 7 redirects (301/302) with the help of Lambda@Edge (provided by AWS CloudFront). I have created a simple solution using the Serverless framework, which does the following redirects:
https://maslick.tech -> https://github.com/maslick
https://maslick.tech/cv -> https://www.linkedin.com/in/maslick/
https://maslick.tech/qa -> https://stackoverflow.com/users/2996867/maslick
https://maslick.tech/ig -> https://www.instagram.com/maslick/
But you can customize it by adjusting handler.js according to your needs. You might also need to create a free TLS certificate using AWS Certificate Manager in the us-east-1 region and attach it to your CloudFront distribution, but this is optional.
Lambda@Edge will give you low latencies, since your redirects will be served from CloudFront's edge locations across the globe.
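If you do go the certificate route, a sketch of requesting one from the CLI - note that certificates used by CloudFront must live in us-east-1 (the domain is a placeholder):
aws acm request-certificate --domain-name my-domain.com --validation-method DNS --region us-east-1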
How I got it to work was:
Set a CNAME record from example.org to <USERNAME>.github.io in the Route 53 console
Set Custom domain to example.org in the GitHub Pages settings for github.com/<USERNAME>/<REPO>
Note: you shouldn't be setting the CNAME record to <USERNAME>.github.io/<REPO>
Source: https://deanattali.com/blog/multiple-github-pages-domains/
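If you'd rather script the DNS step than click through the console, a sketch using the Route 53 CLI (the hosted zone ID and names are placeholders; note that Route 53 won't accept a CNAME at the zone apex itself - apex setups need an alias or A records instead):
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.org",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "<USERNAME>.github.io" }]
    }
  }]
}'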

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error: Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log in the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, namely at some IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which will give you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving from your own computer, but it won't work when you're deploying outside the Internet, as in a VPN, an intranet, or behind a tunnel.
So, the steps:
Get the IP address you're deploying at - one that's not a public domain; let's say it's 10.0.0.1 as an example.
Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins on the Google Developer Console.
Open your site by visiting http://10.0.0.1.xip.io.
Clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration, but use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add the domain mydevserver.com to Authorized JavaScript Origins. If you are using some non-standard port, then specify it with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L 8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com:
google developers console
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally, using google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP form to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server.
Put your files (index and co) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser's URL bar: http://localhost/yourfolder/index.html

AWS API Gateway won't open up

I created a "hello world" Lambda function and then deployed it to an endpoint using AWS's API Gateway:
All very much basic settings, but I was sure to change the security to "open", and while I was told that it could take up to 15 minutes for the domain to resolve, I found that even after 30 I was getting the following response from the "open" endpoint:
{"message":"Missing Authentication Token"}
Am I missing something obvious? Shouldn't this have been available with what I did?
Note: it was pointed out that this image is of a PUT, not a GET. I tried both and both came back with errors. Just to check, I've run GET and PUT through Postman and get a similar but not identical response:
and then GET ...
When I test the Lambda function in the console it runs successfully, but running it through the API Gateway gives a different articulation of the same error:
Tue Sep 29 20:57:43 UTC 2015 : Execution failed due to configuration error: Invalid permissions on Lambda function
and yet I used the default permissions that the console suggested. The lambda function itself is very basic and can be found here: code
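For what it's worth, that particular error usually means API Gateway isn't allowed to invoke the function; granting the permission from the CLI looks roughly like this (the function name, account ID, API ID, and route are placeholders):
aws lambda add-permission \
  --function-name hello-world \
  --statement-id apigateway-invoke \
  --action lambda:InvokeFunction \
  --principal apigateway.amazonaws.com \
  --source-arn "arn:aws:execute-api:us-east-1:<account-id>:<api-id>/*/GET/hello"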
I was having the exact same problem today. Whatever I did didn't work, but I finally figured it out: it turns out that in order for the changes to take effect, you need to deploy the API.
So first go to Resources and click on the Deploy API button. It will ask for a deployment stage. Once deployed, I could call my API without any issues.
I know it's been a while since you posted the question, but thought it might come in handy for other people as well.
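For anyone scripting this instead of clicking through the console, the equivalent CLI call is something like the following (the API ID is a placeholder):
aws apigateway create-deployment --rest-api-id a1b2c3d4e5 --stage-name prod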
I had this same issue with a deployed API that was being hit frequently; around midday the requests would stop working and fail with { Missing Authentication Token }.
My issue was not the URL or a stage that was not deployed, but I do know AWS throws that error for both of those reasons.
However, I found a command to invalidate the API Gateway stage cache, because in my case I was using a custom domain attached to CloudFront.
aws apigateway flush-stage-cache --rest-api-id 97y41psdkg --stage-name dev
After running this I stopped getting the { Missing Authentication Token } error.
You need to use "AWS Signature" under the Authorization tab in Postman. See this AWS guide on what to enter into those fields:
http://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-use-postman-to-call-api.html
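As an alternative to Postman, recent versions of curl (7.75+) can compute the SigV4 signature themselves - a sketch, with the region, API ID, and stage as placeholders:
curl --aws-sigv4 "aws:amz:us-east-1:execute-api" \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  "https://<api-id>.execute-api.us-east-1.amazonaws.com/prod/resource"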
Please use your resource name at the end of your API URL, e.g.
https://***********.execute-api.us-east-1.amazonaws.com/Stag/number
Here number is my resource name.

Using Meteor browser-policy package allowOriginForAll for AWS works on http site but not https

So we are using the Meteor browser-policy package, and using Amazon S3 to store content.
On the server we have set up the browser policy as follows:
BrowserPolicy.content.allowOriginForAll('*.amazonaws.com');
BrowserPolicy.content.allowOriginForAll('*.s3.amazonaws.com');
This works fine in local dev and in production when visiting our http:// site. However, when using the https:// address of our site, the AWS content no longer passes this policy.
The following error appears in the console:
Refused to load the image 'http://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png' because it violates the following Content Security Policy directive: "img-src data: 'self' *.google-analytics.com *.zencdn.net *.filepicker.io *.uservoice.com *.amazonaws.com *.s3.amazonaws.com".
As you can see, we have some other origins allowed in the browser policy; these all seem to work fine in both http and https. AWS S3 is the only one that is failing.
I've tried Chrome, Firefox, and Safari and they all have the same issue.
What's going on?
I may not have the exact answer to this question but I have some information which the community may find helpful.
First, you should avoid serving mixed content. I'm unclear on whether that alone would set off the browser-policy alerts, but you just shouldn't do it anyway. The easiest solution is to use a protocol-relative URL (e.g. //our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png) or to explicitly specify https in your URL.
Second, I too assumed that the wildcard worked like a glob. However, I've been told that it works the same way as an SSL certificate rule - i.e. for all subdomains or for a specific subdomain. In other words, *.example.com and www.example.com are valid, but *.foo.example.com isn't meaningful. I think you want to explicitly add your bucket like so:
BrowserPolicy.content.allowOriginForAll('our-bucket-name.s3.amazonaws.com')
unless you literally want to trust all of amazonaws.com.