How to use CGI to Determine if URL Request is using HTTPS? - coldfusion

I am trying to switch our site from HTTP to HTTPS. In some scenarios we need the site to use HTTP, and at other times HTTPS. I intend to use CGI variables to determine whether the request is HTTP or HTTPS.
As far as I can tell, JSON requests must match the protocol of the original request. If you request http://example.org, you must call JSON with http://example.org/file.json. If you request https://example.org/, you must call JSON with https://example.org/file.json.
Normally, I would use CGI variables to tell me whether the request is HTTP or HTTPS. I can test CGI.HTTPS to see if it is on or off. I can check CGI.SERVER_PORT to see if it is 80 or 443. I can check CGI.SERVER_PORT_SECURE to see if it is 0 or 1.
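For example, a minimal CFML sketch of that kind of check:

<!--- Treat the request as secure if any of the standard CGI indicators says so --->
<cfif cgi.https EQ "on" OR cgi.server_port EQ 443 OR cgi.server_port_secure EQ 1>
    <cfset protocol = "https">
<cfelse>
    <cfset protocol = "http">
</cfif>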
When I view our web site in every browser, I can dump the CGI variables and get what I expect 100% of the time.
When a few other people in our office and outside our office make the same request, they get CGI variable values suggesting their request is NOT secure: CGI.HTTPS shows off, CGI.SERVER_PORT shows 80, and CGI.SERVER_PORT_SECURE shows 0. Every other indicator shows that the site is secure in every browser, but the CGI variable values say it is not secure.
The site behaves flawlessly for everyone on dev and stage. Only on live, which is behind a load balancer, does this issue exist (for some people).
Is this a load balancer issue? Is this a certificate settings issue? Why are my CGI variables lying to me? How can I work around this issue?


Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in: it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be, for example, example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
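For reference, with these settings a successful login should produce a Set-Cookie response header along these lines (token abbreviated; 2592000 seconds is the 30-day maxAge):

Set-Cookie: keystonejs-session=<token>; Max-Age=2592000; Domain=example.com; Path=/; SameSite=Lax; Secure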
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
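For completeness, a minimal sketch, assuming createUploadLink comes from apollo-upload-client:

// Point the upload link at the rewritten relative path; credentials: 'include'
// makes the browser attach the session cookie to each request.
import { createUploadLink } from 'apollo-upload-client';

const link = createUploadLink({
  uri: '/api/graphql',
  credentials: 'include',
});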
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which informs the backend of the protocol used for the original request
Tell express to trust what the proxy is saying by configuring the trust proxy setting (see the sketch after this list)
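A minimal sketch of the second step, assuming an Express-based backend sitting behind nginx:

// nginx side, for reference: proxy_set_header X-Forwarded-Proto $scheme;
const express = require('express');
const app = express();

// Trust the first proxy hop, so req.secure and req.protocol reflect
// the X-Forwarded-Proto header rather than the plain-HTTP hop.
app.set('trust proxy', 1);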
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this via the cors option in the Keystone server config, which Keystone uses to configure the cors package.
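A minimal sketch, assuming Keystone 6 and a front end served from https://www.example.com (that origin is an assumption):

// The cors options are handed straight through to the cors package.
import { config } from '@keystone-6/core';

export default config({
  server: {
    cors: {
      origin: ['https://www.example.com'],
      credentials: true, // required for the session cookie to be accepted cross-origin
    },
  },
  // ...db, lists, session, etc.
});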
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

browser not sending cookie to server from localhost while sending from another origin

I have an issue I don't understand.
I make an API call from a.b.com to a.b.com.
In devtools I can see the request, and I can see it contains the cookie as expected.
Then I make the same API call from my localhost to a.b.com, and the cookie is not present.
As far as I know from the documentation, a cookie should be sent to the server if it matches all of the cookie's rules (domain, path, expires, etc.).
If so, why is the request different for each origin?
We use CORS calls all the time.
In addition, just to verify, I disabled Chrome's third-party cookie protection.
Here is an image to provide more details:
Don't be shy to point me to good documentation on this matter :)
For security reasons, you cannot share cookies between two different domains. You cannot exchange cookies between localhost and a.b.com.

CookieManager.check.cookies=false not working

In jmeter.properties I set "CookieManager.check.cookies=false", but cross-domain cookies still aren't working.
For example, following this guide and using their demo site, setting a cookie with a domain of "blazedemo.com" works, but if I change the domain to anything else it fails.
JMeter only sends cookies that match the domain of the server in the request.
The property you've set impacts the way JMeter reads cookies, not the way it sends them.
To check, send an HTTP request to a host for which you created the cookie; you'll see it works.

The procedure of Opening a website using IE8

I want to know, when I use IE8 to open a website (like www.yahoo.com), which APIs are called by IE8, so I can hook those APIs to capture which website IE8 is currently opening.
When you enter a URL into the browser, the browser (usually) makes an HTTP request to the server identified by the URL. To make the request, the IP address of the server is required, which is obtained by a DNS lookup of the host (domain) name.
Once the response -- usually containing HTML markup -- is received, the browser renders it to display the webpage.
More details available here: what happens when you type in a URL in browser
So, in the general case, no "API" request as such is made. (Technically speaking, you can think of the original HTTP request to the server as an API request.) The sort of "API" request you presumably mean, however, is not made in this general case just described. Those requests happen when the JavaScript executing on the page makes an Ajax HTTP request (XmlHttpRequest) to the web server to carry out some operation.
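For illustration, a minimal sketch of such a request (the endpoint is a placeholder):

// The page's JavaScript issues an HTTP request without a full page load.
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // handle the server's response here
  }
};
xhr.open('GET', '/some/endpoint');
xhr.send();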
I am not sure about IE8, but the "developer tools" feature of most modern browsers (including IE9 and IE10) would let you see the Ajax HTTP requests that the webpage made as it carried out different operations.
Hope this helps.
IE uses Microsoft's WinSock library API to interact with web servers.
You may want to look for a network monitoring/sniffing API, which you could use to examine HTTP requests, and determine the URLs the browser is using.

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user logs in, I want the site working over HTTP.
To do so, I am running the same server on ports 80 and 443. (Is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports.)
When the user navigates away from login, due to the redirection to the HTTP server, the data in request.session is not present (although it is present on HTTPS), thus showing that no user is logged in. So, assuming the Apache setup is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on HTTPS over to HTTP.
Can anybody shed some light on this? Thank you
First off, make sure that the SESSION_COOKIE_SECURE setting is set to False. As long as the domains are the same, the cookies in the browser should be present, and so the session information should still be there.
Take a look at your cookies using a browser plugin and search for the session cookie you have set. By default this cookie is named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and the regular session.
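A minimal sketch of the relevant settings.py lines (both are real Django settings; the values are the ones discussed above):

# Allow the session cookie to travel over both HTTP and HTTPS.
SESSION_COOKIE_SECURE = False
# Django's default session cookie name, shown for reference.
SESSION_COOKIE_NAME = "sessionid"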
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secure in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer, and the exploits I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to have essentially two different servers running because a.) it is listening on two different ports and b.) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run servers with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will then need logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference visible to Django will be the value of request.is_secure(). All the URLs and views in your urlconf will be accessible over both ports (see the sketch below).
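A minimal sketch of such a middleware, in modern Django middleware style (the class name and login path are assumptions):

from django.http import HttpResponseRedirect

class LoginHttpsRedirectMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        # Send unencrypted login attempts over to the HTTPS server.
        if request.path == "/login/" and not request.is_secure():
            return HttpResponseRedirect(
                "https://" + request.get_host() + request.get_full_path()
            )
        return self.get_response(request)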
Whew that is a lot. I hope that helps.