Flask routing to AWS Kubernetes internal domain for some paths

I have a Flask application running on an AWS Kubernetes cluster, and I have some routing problems with certain paths.
For example, accessing external.domain.com/test/ works fine, while accessing external.domain.com/test tries to redirect me to the internal.domain.local/test/ URL, which is obviously not accessible from the outside.
It seems that this issue comes up whenever Flask builds a URL, e.g. via url_for.
I already tried to override the SERVER_NAME setting, which seems to be used when building URLs, but then the server does not start, stating:
Current server name 'internal.domain.local' doesn't match configured server name 'external.domain.com'.
I am not sure whether I should instead rewrite it in the Ingress configuration using the nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to annotations, but unfortunately the documentation seems pretty sparse.
If anyone knows how to fix this issue, thank you in advance.
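A minimal sketch of one common workaround (not from the original post; the app below is hypothetical): if the trailing-slash redirect and url_for are being built from the Host header the pod sees inside the cluster, Werkzeug's ProxyFix middleware can make Flask honor the X-Forwarded-Proto/Host headers that the nginx ingress controller forwards by default, instead of setting SERVER_NAME.

# Sketch only - assumes the ingress passes X-Forwarded-Proto and
# X-Forwarded-Host for the original request to external.domain.com.
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Trust one hop of proxy headers so redirects (e.g. /test -> /test/) and
# url_for(..., _external=True) use the external host, not internal.domain.local.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1)

@app.route("/test/")
def test():
    return "ok"

If the internal hostname still leaks into the Location header after that, rewriting the redirect at the ingress (the proxy-redirect-from/to annotations mentioned above) is another place to attack it.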

Related

GCP: load balancer rewrite path

I'm setting up traffic management for a global external HTTP(S) load balancer. I have two backend Cloud Run services, serverless-lb-www-service and serverless-lb-api-service, that I want to serve from the same IP/domain.
I want to configure them like this:
example.com -> serverless-lb-www-service
example.com/api -> serverless-lb-api-service
Using simple routing rules, I can serve traffic more or less as expected:
path      backend
/*        serverless-lb-www-service
/api      serverless-lb-api-service
/api/*    serverless-lb-api-service
However, I'm running into an issue when I try to access an endpoint other than the API root, like example.com/api/test: I always see the response I would expect from example.com/api.
I believe it has something to do with my API (running express.js) receiving the path with the /api prefix when it expects just /test. I think I might need to set up a rewrite that removes /api before the request hits the API.
Any help would be much appreciated. Thanks
Update
I can confirm that the requests, as logged in the API, are all prefixed with /api. I can work around the issue by changing all API route handlers to expect the /api prefix in the production environment, but I would still rather do this via a path rewrite so the application code is the same in all environments.
You can customize the host and path rules; you can follow the steps through this link. It also uses Cloud Run services and might help you with the path rewrite issue.
Note: just scroll all the way down if the link does not jump straight to the "Customize the host and path rules" steps.
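For what it's worth, here is a hedged sketch of what that rewrite could look like inside the URL map itself (this is my guess at the configuration, not something taken from the linked steps; verify the exact pathPrefixRewrite semantics for your load balancer before relying on it). The URL map can be dumped and re-imported with gcloud compute url-maps export / import.

# URL map fragment (sketch): route /api and /api/* to the API backend and
# strip the /api prefix before the request reaches Cloud Run.
pathMatchers:
- name: main
  defaultService: global/backendServices/serverless-lb-www-service
  pathRules:
  - paths:
    - /api
    - /api/*
    service: global/backendServices/serverless-lb-api-service
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /

The intent is for example.com/api/test to arrive at the express.js app as /test, so the route handlers would not need the /api prefix in production.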

How can I get django to work on https using elastic beanstalk and apache?

I have my .config files set up using the information available from AWS, and I have my load balancer listening on 443. My website is served correctly via HTTPS when I connect using my Elastic Beanstalk URL. Of course that URL is not what my SSL certificate lists, so there's an error, but nonetheless it displays all the HTML and static files. HTTPS seems to be working there.
When I visit my custom domain using HTTP, everything also displays correctly, so my application seems fine. But when I attempt HTTPS using my custom domain, nothing is loaded from my server; I just get the "Index of /" page. This is what I receive when my ALLOWED_HOSTS is incorrect, so I assume it's something super simple in my settings file that is blocking Django from allowing Apache to serve the content over HTTPS to my custom domain. Or else there's one other place I'm missing that needs me to register my domain with my load balancer? Is that a thing? I feel like I've been scouring the internet for help here, so any suggestions are very much appreciated.
One other note: I have all my static files served via S3. That bucket actually does get loaded correctly when I visit my website's custom URL over HTTPS... Not sure if that's a clue or just even more confusing.
Serving my static files via S3 led me to omit the line below, as I wasn't quite sure what to do with it:
Alias /static/ /opt/python/current/app/static/
from the example listed here
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html
Again, everything seems to be working via https://[...]elasticbeanstalk.com, with an expected
ERR_CERT_COMMON_NAME_INVALID
Not sure why I'm getting "Index of /" when visiting my custom domain over HTTPS; HTTP works fine.
I kind of figured it out in asking that question...
Nowhere in any tutorial had I read anything about creating a DNS entry that aliases my load balancer to my domain name... This info solved it for me:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
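For reference, the same alias record expressed as a boto3 sketch (all of the zone IDs, the domain, and the load balancer DNS name below are placeholders, not values from my setup; the record can equally be created in the Route 53 console as the linked guide describes):

# Sketch: create an alias A record pointing the custom domain at the
# environment's load balancer. Every identifier here is a placeholder.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z_MY_DOMAIN_ZONE_ID",  # hosted zone of your domain
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "AliasTarget": {
                    # Fixed hosted zone ID of the ELB type/region (see AWS docs)
                    "HostedZoneId": "Z_ELB_REGION_ZONE_ID",
                    "DNSName": "my-load-balancer-123456.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)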
Check out this post about forcing HTTPS with Django and Elastic Beanstalk. This solution only works if your Elastic Beanstalk environment has an application load balancer (as opposed to a classic load balancer):
https://medium.com/@Pibastte/how-to-setup-http-to-https-redirection-for-a-django-application-on-aws-elastic-beanstalk-and-have-de44cf05565
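The gist of that approach, as a minimal sketch (these are standard Django settings, not necessarily the exact steps from the linked post; the domain names are placeholders):

# settings.py - sketch only; adjust the hosts to your own domains.
ALLOWED_HOSTS = [
    "www.example.com",        # custom domain (placeholder)
    ".elasticbeanstalk.com",  # Elastic Beanstalk URL, if you still use it
]

# The application load balancer terminates TLS and reports the original
# scheme in X-Forwarded-Proto; tell Django to trust it.
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")

# Redirect any request Django sees as plain HTTP to HTTPS.
SECURE_SSL_REDIRECT = True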

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this a duplicate: this is not the same as the question asked at "Google sign in website Error: Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual domain names (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. on an IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io that gives you a URL your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your computer, but it won't work when you're deploying outside the Internet, as in a VPN, intranet, or behind a tunnel.
So, the steps:
Get the IP address you're deploying at (the one that is not a public domain); let's say it's 10.0.0.1 as an example.
Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins in the Google Developer Console.
Open your site by visiting http://10.0.0.1.xip.io.
Clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add this domain, mydevserver.com, to Authorized JavaScript Origins. If you are using some nonstandard port, then specify it with your domain in Authorized JavaScript Origins.
Note: Remove your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in the Google Developers Console (console.developers.google.com).
After battling with it for a few hours, I found that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue was that I hosted my web app locally and was using Google Auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP to localhost: http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder),
then type this in your browser URL bar: http://localhost/yourfolder/index.html

Cubesviewer configuration for proper authentication

I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP status 403 error for the URL http://localhost:8000/cubesviewer/view/list/.
After some googling and reading, I figured I would need to add REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error for that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the cubesviewer tag v0.10 from the GitHub repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works on its own.
I have reproduced this. This error may occur when you use different URLs to access the website and to access related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While both resolve to the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer" but I recommend you to use hostnames and not IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.
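If you do want to try the CORS route instead of matching hostnames, here is a hedged sketch of what that could look like on the Django side, assuming the django-cors-headers package (my assumption; the pull request above may do this differently, and setting names vary between versions):

# settings.py - sketch only, assumes django-cors-headers is installed
# (pip install django-cors-headers); not taken from the CubesViewer docs.
INSTALLED_APPS += ["corsheaders"]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # keep this as early as possible
] + MIDDLEWARE

# Origins the frontend is actually served from.
CORS_ALLOWED_ORIGINS = [
    "http://localhost:8000",
    "http://127.0.0.1:8000",
]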

Connecting a DD-WRT router to a Squid proxy running on AWS

I am trying to get a Linksys router with the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over the WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show relevant part):
While trying to retrieve the URL: /
Note the lone slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), it returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the domain has been stripped and only the string to the right of the domain name is being requested.
For some background, I have installed Squid with as generic a configuration as possible, and have done so on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch the network settings on my Mac to use this Squid proxy directly, it works fine. Only the connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...