CloudFront SSL issue on Canvas LMS. Broken links

Does anybody have experience deploying Canvas on AWS Lightsail?
I'm struggling with configuring the Canvas config and domain.yml - I always get broken links when accessing Canvas via https://domainname.cloudfrontetc... Also, file uploads and invitation links don't work correctly.
It seems my problem is similar to the one described here: "Cloudfront SSL issue on wordpress. Too many redirects"
But in my case it's not WordPress, and I don't have the PHP config file.
Greg

Related

AWS Route 53 Domain Point To GitHub Pages

I am new to working with AWS and Route 53, so any help is appreciated.
I have created an organization on GitHub, and then created a simple repository for a static site to display with GitHub Pages. This is working as expected, and I can see the static site at the URL generated by GitHub (something like https://<githubOrgName>.github.io/<repoName>/).
I got a domain from AWS, and now I'm trying to set it up so the apex domain (e.g. "my-domain.com") points to the GitHub Pages site.
I followed the instructions found at https://docs.github.com/en/pages/configuring-a-custom-domain-for-your-github-pages-site/about-custom-domains-and-github-pages, but it doesn't seem to be working.
I am trying to make the apex domain point to the repository's GitHub Pages site, something like:
https://my-domain.com -> https://<githubOrgName>.github.io/<repoName>/
... but this only shows a blank screen when I go to the root domain ("my-domain.com"). I have also tried going to https://my-domain.com/<repoName>/, but this shows me a GitHub 404 page (so something does seem to be correctly forwarded to GitHub).
My AWS Route 53 configuration is similar to the following (I have tried to remove sensitive details):
Can anyone explain to me what I am doing wrong? I am new to working with domains, so any help is appreciated.
Using Route 53 alone won't help you here, because your target URL contains a URL path, i.e. /<repoName>/.
DNS is a name-resolution system and knows nothing about HTTP.
Furthermore, the origin server (github.io) might be running a reverse proxy which parses the request headers, among which is the Host header. Your browser automatically sets this header from the URL you feed it, so you end up sending the wrong value (i.e. my-domain.com), which GitHub cannot process. You can explicitly set this header to what GitHub is expecting (e.g. via curl, as sketched below), but I believe that's not what you and your users would like.
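For illustration only, overriding the Host header with curl (the placeholder names are the ones from the question):

curl -H "Host: <githubOrgName>.github.io" https://my-domain.com/<repoName>/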
Instead, you could try using layer-7 redirects (301/302) with the help of Lambda@Edge (provided by AWS CloudFront). I have created a simple solution using the Serverless framework, which does the following redirects:
https://maslick.tech -> https://github.com/maslick
https://maslick.tech/cv -> https://www.linkedin.com/in/maslick/
https://maslick.tech/qa -> https://stackoverflow.com/users/2996867/maslick
https://maslick.tech/ig -> https://www.instagram.com/maslick/
But you can customize it by adjusting handler.js according to your needs. You might also need to create a free TLS certificate using AWS Certificate Manager in the us-east-1 region and attach it to your CloudFront distribution, but this is optional.
Lambda@Edge will give you low latencies, since your redirects will be served from CloudFront's edge locations across the globe.
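For reference, a minimal sketch of what such a handler.js could look like, assuming a viewer-request trigger; the path map mirrors the list above and is purely illustrative:

'use strict';

// Illustrative path -> target map; adjust to your needs.
const redirects = {
  '/cv': 'https://www.linkedin.com/in/maslick/',
  '/qa': 'https://stackoverflow.com/users/2996867/maslick',
  '/ig': 'https://www.instagram.com/maslick/',
};

// Lambda@Edge viewer-request handler that answers every request
// with a 301 redirect instead of forwarding it to the origin.
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const target = redirects[request.uri] || 'https://github.com/maslick';
  return {
    status: '301',
    statusDescription: 'Moved Permanently',
    headers: {
      location: [{ key: 'Location', value: target }],
    },
  };
};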
How I got it to work was:
1. Set a CNAME record from example.org to <USERNAME>.github.io in the Route 53 console
2. Set Custom domain to example.org in the GitHub Pages settings for github.com/<USERNAME>/<REPO>
Note: you shouldn't set the CNAME record to <USERNAME>.github.io/<REPO>
Source: https://deanattali.com/blog/multiple-github-pages-domains/
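For reference, in zone-file notation the record from step 1 would look roughly like this (note that DNS doesn't allow a CNAME at the zone apex itself, so for a bare apex domain Route 53 would need an alias record instead):

example.org.  300  IN  CNAME  <USERNAME>.github.io.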

The page at https://lyrics-chords.herokuapp.com/ was not allowed to display insecure content from http://localhost:8000/auth/user

I've just finished creating a Django-React app and have pushed the changes to Heroku. The frontend (JS and CSS) appears on the website no problem, but requests to the backend result in the following error:
[blocked] The page at https://lyrics-chords.herokuapp.com/ was not allowed to display insecure content from http://localhost:8000/auth/user
I've consulted the Internet, but no one seems to be getting the same error message. Consulting a friend, it seems I have to secure my backend with HTTPS, and further researching the subject, it seems there is no free way to upload an SSL/TLS certificate (reference: heroku: set SSL certificates on Free Plan?). Is there a solution to this?
Silly me, really. It turns out localhost:8000 refers to the computer of the user. https://lyrics-chords.herokuapp.com/ is the server for both the backend and the frontend, so updating the backend URL calls sufficed.
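In code, the fix looks something like this (the environment check and endpoint are illustrative, not the app's actual source):

// Hypothetical sketch: pick the backend base URL by environment instead of
// hard-coding http://localhost:8000 into the frontend.
const API_BASE =
  process.env.NODE_ENV === 'production'
    ? 'https://lyrics-chords.herokuapp.com' // same app serves the backend
    : 'http://localhost:8000';              // local development server

fetch(`${API_BASE}/auth/user`, { credentials: 'include' })
  .then((res) => res.json())
  .then((user) => console.log(user));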

How can I get Django to work on HTTPS using Elastic Beanstalk and Apache?

I have my .config files set up using the information available on AWS, and I have my load balancer listening on 443. My website is served correctly via HTTPS when I connect using my Elastic Beanstalk URL. Of course that URL is not what my SSL certificate lists, so there's an error, but nonetheless it is displaying all the HTML and static files. HTTPS seems to be working there.
When I visit my custom domain using HTTP, everything also displays correctly, so my application seems fine, but when I attempt HTTPS using my custom domain, nothing is loaded from my server. I just get the "Index of /" page. This is what I receive when my ALLOWED_HOSTS is incorrect, so I assume it's something super simple in my settings file that is blocking Django from allowing Apache to serve the content over HTTPS to my custom domain. Or else there's one other place I'm missing that needs me to register my domain with my load balancer? Is that a thing? I feel like I've been scouring the internet for help here, so any suggestions are very much appreciated.
One other note is that I have all my static files served via S3. That bucket actually does load correctly when I visit my website's custom URL over HTTPS... Not sure if that's a clue or just even more confusing.
Serving my static files via S3 led me to omit the line below, as I wasn't quite sure what to do with it:
Alias /static/ /opt/python/current/app/static/
from the example listed here
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html
Again, everything seems to be working via https://[...]elasticbeanstalk.com, with an expected
ERR_CERT_COMMON_NAME_INVALID
Not sure why I'm getting "Index of /" when visiting my custom domain over HTTPS. HTTP works fine too.
I kind of figured it out in asking this question...
Nowhere in any tutorial had I read anything about creating a DNS entry that aliases my load balancer to my domain name... This info solved it for me:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
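For reference, the alias record that guide describes, expressed as a change batch for aws route53 change-resource-record-sets (the domain, load balancer DNS name, and hosted-zone ID below are placeholders):

{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "my-domain.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z_ELB_HOSTED_ZONE_ID",
          "DNSName": "my-load-balancer-123456789.us-east-1.elb.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}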
Check out this post about forcing HTTPS with Django and Elastic Beanstalk. This solution only works if your Elastic Beanstalk environment has an application load balancer (as opposed to a classic load balancer):
https://medium.com/@Pibastte/how-to-setup-http-to-https-redirection-for-a-django-application-on-aws-elastic-beanstalk-and-have-de44cf05565

Invoking a Lambda through API Gateway giving 403 response?

I am using AWS CodeStar to deploy my React application using the serverless Node.js template. This is the URL given by CodeStar after successful completion of all the stages: https://xxxxx.execute-api.us-east-1.amazonaws.com/Prod. This URL displays all the components in my app correctly. In the navbar of my app I have items like a, b, c, where clicking on each one redirects to a new component (i.e. https://xxxxx.execute-api.us-east-1.amazonaws.com/a, https://xxxxx.execute-api.us-east-1.amazonaws.com/b, etc.). But when I refresh a page with a URL like https://xxxxx.execute-api.us-east-1.amazonaws.com/b, I get an error like {"message":"Forbidden"}, and my console shows this:
favicon.ico:1 GET https://xxxx.execute-api.us-east-1.amazonaws.com/favicon.ico 403
It seems Chrome is fetching the favicon based on the HTTPS link, which fails because there is no such favicon at that location. I tried removing the favicon.ico link in index.html, but even then Chrome uses the same URL to fetch the favicon, which eventually fails. I have followed many suggestions on SO to achieve this, but no luck. Is there any way to tell API Gateway to exclude these favicon GET requests and display my app rather than showing the Forbidden message?
And I am pretty sure that I had enabled logs for both API Gateway and Lambda, where I didn't find any forbidden errors (i.e. 403), which is weird because I can see those 403 errors in my console.
Thanks
Any help is highly appreciated.
The https://xxxxx.execute-api.us-east-1.amazonaws.com/Prod URL provided by API Gateway is the base URL for your site, so those paths would have to be /Prod/a instead of /a.
One way to get around that is to register your own domain and connect it to API Gateway via a custom domain name. That would allow you to have https://example.com as your base URL, and your paths could stay /a, /b, etc.
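If you go the custom-domain route, the mapping from the domain to the Prod stage can be created in the API Gateway console or with the AWS CLI; a rough sketch, assuming the domain and its certificate are already set up (the domain name and API id are placeholders):

aws apigateway create-base-path-mapping \
    --domain-name example.com \
    --rest-api-id xxxxx \
    --stage Prod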

Using Meteor browser-policy package allowOriginForAll for AWS works on http site but not https

So we are using the Meteor browser-policy package, and using Amazon S3 to store content.
On the server we have set up the browser policy as follows:
BrowserPolicy.content.allowOriginForAll('*.amazonaws.com');
BrowserPolicy.content.allowOriginForAll('*.s3.amazonaws.com');
This works fine in local dev, and in production when visiting our http:// site. However, when using the https:// address to our site, the AWS content no longer passes this policy.
The following error is put on the console:
Refused to load the image 'http://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png' because it violates the following Content Security Policy directive: "img-src data: 'self' *.google-analytics.com *.zencdn.net *.filepicker.io *.uservoice.com *.amazonaws.com *.s3.amazonaws.com".
As you can see, we have some other origins allowed in the browser policy; these all seem to work fine in both HTTP and HTTPS. AWS S3 is the only one that is failing.
I've tried Chrome, Firefox, and Safari, and they all have the same issue.
What's going on?
I may not have the exact answer to this question, but I have some information which the community may find helpful.
First, you should avoid serving mixed content. I'm unclear whether that would set off the browser-policy alerts, but you just shouldn't do it anyway. The easiest solution is to use a protocol-relative URL or just explicitly specify https in your URL, for example:
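// Explicit https origin (preferred); bucket name taken from the question:
const imageUrl = 'https://our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png';
// Protocol-relative form, which inherits the page's scheme:
const imageUrlRelative = '//our-bucket-name.s3.amazonaws.com/asset-stored-in-s3.png';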
Second, I too assumed that the wildcard worked like a glob. However, I've been told that it works the same way as an SSL certificate rule, i.e. for all subdomains or for a specific subdomain. In other words, *.example.com and www.example.com are valid, but *.foo.example.com isn't meaningful. I think you want to explicitly add your bucket like so:
BrowserPolicy.content.allowOriginForAll('our-bucket-name.s3.amazonaws.com');
unless you literally want to trust all of amazonaws.com.