Compojure - how can I get the server's own IP? - clojure

My Elastic Beanstalk app, written in Clojure using the Compojure framework, serves an HTML document with a JavaScript snippet that refreshes an element of the document on a timer, for which it has to query back to the server.
I don't like the idea of hard-coding a URL anywhere in that code; it would be a hassle to change. I could make it a config parameter and set it in the Elastic Beanstalk configuration, but I figure there should be a way to get my public IP in code. However, I can't seem to find anything about that.
Is there a way to get your own public IP from within the Ring server?

Elastic Beanstalk should set the X-Forwarded-Host header on your request, which contains the hostname you can use in your application.
Excerpt from an example request:
{:headers
{"x-forwarded-host"
"default-environment.adfadsbxczvdf.us-east-1.elasticbeanstalk.com"}}

Related

GCP: load balancer rewrite path

I'm setting up traffic management for a global external HTTP(S) load balancer. I have two backend Cloud Run services, serverless-lb-www-service and serverless-lb-api-service, that I want to serve from the same IP/domain.
I want to configure them like this:
example.com -> serverless-lb-www-service
example.com/api -> serverless-lb-api-service
I can use simple routing rules to serve traffic more or less as expected:
path      backend
/*        serverless-lb-www-service
/api      serverless-lb-api-service
/api/*    serverless-lb-api-service
However, I'm running into an issue when I try to access an endpoint that isn't the API root, like example.com/api/test: I always see the response I would expect from example.com/api.
I believe it has something to do with my API (running Express.js) receiving the path with the /api prefix still attached when it expects to serve that route as just /test. I think I might need to set up a rewrite that removes /api before the request hits the API.
Any help would be much appreciated. Thanks
Update:
I can confirm that the requests, as logged by the API, are all prefixed with /api. I can work around the issue by changing all the API route handlers to expect the /api prefix in the production environment. However, I would still rather do this via a path rewrite so the application code is the same in all environments.
You can customize the host and path rules; you can follow the steps through this link. It also uses Cloud Run services and might help you with the path rewrite issue.
Note: just scroll all the way down if the link does not jump straight to the "Customize the host and path rules" steps.
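If you do end up needing an actual rewrite, the external HTTP(S) load balancer can express it in the URL map via route rules. Below is only a rough sketch of what such a URL map might look like when imported with gcloud compute url-maps import; the project ID, path matcher name, and backend service paths are placeholders, and the exact prefix-match and rewrite behaviour should be verified against the linked documentation:

defaultService: projects/my-project/global/backendServices/serverless-lb-www-service
hostRules:
- hosts:
  - example.com
  pathMatcher: main
pathMatchers:
- name: main
  defaultService: projects/my-project/global/backendServices/serverless-lb-www-service
  routeRules:
  - priority: 1
    matchRules:
    - prefixMatch: /api/
    routeAction:
      urlRewrite:
        pathPrefixRewrite: /   # strip the /api prefix before the request reaches the API
      weightedBackendServices:
      - backendService: projects/my-project/global/backendServices/serverless-lb-api-service
        weight: 100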

How can I get Django to work over HTTPS using Elastic Beanstalk and Apache?

I have my .config files set up using the information available from AWS, and I have my load balancer listening on 443. My website is served correctly over HTTPS when I connect using my Elastic Beanstalk URL. Of course, that URL is not what my SSL certificate lists, so there is a certificate error, but nonetheless it displays all the HTML and static files. HTTPS seems to be working there.
When I visit my custom domain over HTTP, everything also displays correctly, so my application seems fine; but when I attempt HTTPS with my custom domain, nothing is loaded from my server. I just get the "Index of /" page. This is what I get when my ALLOWED_HOSTS is incorrect, so I assume it's something simple in my settings file that is blocking Django from letting Apache serve the content over HTTPS to my custom domain. Or is there some other place where I need to register my domain with my load balancer? Is that a thing? I feel like I've been scouring the internet for help here, so any suggestions are very much appreciated.
One other note: all my static files are served from S3. That bucket actually does load correctly when I visit my website's custom URL over HTTPS... not sure if that's a clue or just even more confusing.
Serving my static files from S3 led me to omit the line below, as I wasn't quite sure what to do with it:
Alias /static/ /opt/python/current/app/static/
from the example listed here
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-python.html
Again, everything seems to be working via https://[...]elasticbeanstalk.com, with the expected
ERR_CERT_COMMON_NAME_INVALID
I'm not sure why I'm getting "Index of /" when visiting my custom domain over HTTPS; HTTP works fine.
I kind of figured it out in asking that question...
Nowhere in any tutorial had I read anything about creating a DNS entry that aliases my load balancer to my domain name... This info solved it for me:
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-elb-load-balancer.html
Check out this post about forcing HTTPS with Django and Elastic Beanstalk. This solution only works if your Elastic Beanstalk environment has an application load balancer (as opposed to a classic load balancer):
https://medium.com/@Pibastte/how-to-setup-http-to-https-redirection-for-a-django-application-on-aws-elastic-beanstalk-and-have-de44cf05565

How to tell AWS application load balancer to not forward the path pattern?

I have configured my AWS application load balancer to have the following rules:
/images/* forward to server A (https://servera.com)
/videos/* forward to server B (https://serverb.com)
And this is correctly forwarding to the respective servers. However, I don't want the load balancer to forward the requests as https://servera.com/images and https://serverb.com/videos. I just want the respective servers to be hit as https://servera.com and https://serverb.com, without the path patterns in the request.
I don't want to modify my request parameters or change my server-side code for this. Is there a way I can tell the application load balancer to not forward the path patterns?
Is there a way I can tell the application load balancer to not forward the path patterns?
No, there isn't. It's using the pattern to match the request, but it doesn't modify the request.
I don't want to modify my request parameters or change my server-side code for this.
You'll have to change something.
You shouldn't have to change your actual code. If you really need this behavior, you should be able to accomplish it using the web server configuration -- an internal path rewrite before the request is handed off to the application by the web server should be a relatively trivial reconfiguration in Nginx, Apache, HAProxy, or whatever is actually listening on the instances.
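For instance, if Nginx happens to be what fronts the application on server A (an assumption; the same idea applies to Apache or HAProxy), a location block like the following strips the matched prefix before the app ever sees it. The upstream address is a placeholder:

# Requests arrive from the ALB as /images/...; the app receives them as /...
location /images/ {
    # The URI part ("/") on proxy_pass replaces the matched /images/ prefix.
    proxy_pass http://127.0.0.1:8080/;
}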
Also, it seems to me that you are making things difficult for yourself by wanting the server to respond to a path different from the one requested by the browser. Such a configuration tends to make it harder to ensure correct test results and correct handling of relative and absolute paths, since the application has an inaccurate internal picture of what the browser is requesting or will need to request.

DNS hosting public and web application on different hosts

Here is my setup.
1. Public site hosted by squarespace.com (www.example-domain.com)
2. Web application (AWS EC2/ELB), which I would like to be available via the same domain (my.example-domain.com)
3. Custom profile pages available at www.example-domain.com/username
My question is: how can I set up the DNS to achieve this? If it can't be done through DNS alone, any suggestions? The problem I am facing is that if squarespace.com is handling the www.example-domain.com traffic, how can I have it handle only certain URLs? Maybe I am going about this the wrong way altogether, though.
The first two are OK. As you mention, (1) is not compatible with (3) in a pure DNS config, since www.example-domain.com has to point to a single endpoint.
Some ideas for non-DNS workarounds:
Host the Squarespace site on sqsp.example-domain.com and point your www domain at a custom web server on which you configure the root (/) to redirect (HTTP 3xx) to sqsp.example-domain.com. It will be fairly transparent for the user, except in the browser address bar.
The same, but serving at / a full-page HTML iframe containing sqsp.example-domain.com.
The iframe approach is "less clean"; Google the solutions to form your own opinion.
EDIT:
As @mike-ryan mentioned, there is also the proxy solution, where you configure your web server to fetch the content from another server and return it to your user. If you are already using AWS, a smart way to do this is CloudFront: you can set up CloudFront to proxy one server for one URL path and another server for another path. This is maybe the fastest way to implement what you need. Of course, a proxy is one more "hop", so it may add some delay.
If you really want to have content served from different servers while only using a single domain name, you'll need to set up a proxy server to handle the request routing for you. I am assuming your custom profile pages must be served from your EC2 instance.
Nginx will receive all requests and then decide whether they should be sent to Squarespace or to your web app: requests are reverse proxied to Squarespace or to your app depending on the URL.
This is similar to @smad's answer, except that it is all invisible to the users, which IMHO is better than redirecting the user to a new domain name.
Example steps (a rough config sketch follows the list):
Set up an Nginx server, create two virtual hosts - one for my.example.com, and one for www.example.com
Create two upstreams in your Nginx config - one for Squarespace, and one for your app
Configure the www.example.com virtual host to reverse proxy connections to the Squarespace upstream if the URL is "/"; otherwise, traffic should be proxied to your app upstream [0]
Configure the my.example.com virtual host to proxy all traffic to your app upstream
[0] how to reverse proxy via nginx a specific url?
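A rough sketch of what that configuration might look like. The Squarespace endpoint, app address, and domain names are placeholders, and proxying to Squarespace over HTTPS may also need SSL/SNI settings (e.g. proxy_ssl_server_name on) that are omitted here:

upstream squarespace {
    server ext-cust.squarespace.com:443;   # placeholder; use the endpoint Squarespace gives you
}
upstream webapp {
    server 127.0.0.1:8080;                 # your EC2-hosted application
}

server {
    listen 80;
    server_name www.example.com;

    # The homepage comes from Squarespace...
    location = / {
        proxy_pass https://squarespace;
        proxy_set_header Host www.example.com;
    }

    # ...everything else (e.g. /username profile pages) goes to the app.
    location / {
        proxy_pass http://webapp;
    }
}

server {
    listen 80;
    server_name my.example.com;

    location / {
        proxy_pass http://webapp;
    }
}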

Connecting a DD-WRT router to a Squid proxy running on AWS

I am trying to get a Linksys router running the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show the relevant part):
While trying to retrieve the URL: /
Note the bare slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), it returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the root domain has been removed and only the string to the right of the domain name is being looked up.
For some background, I have installed Squid with as generic a configuration as possible, and have done it on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch my network settings on my Mac to use this Squid proxy, it works fine. Only connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...