Postman and multiple Client Certificates for a single domain? - postman

I've been using Fiddler for a couple of weeks to test an API but we're moving to Postman.
Our API's workflow is that a device must register using a common cert, and in response to a successful registration a private cert is issued to that device. All requests the device makes after that use the private cert.
I'm trying to test multiple devices which means I need Postman to use 5 or 6 certs for a single domain. In Fiddler I could modify the fiddlerscript so I had an array of all the certs I intended to use. If I wanted to switch certs I opened the script and used a different index of my cert array. I'd set oSession["https-Client-Certificate"] and the request would use that cert.
In Postman, I see that I can set a client cert for a particular domain. I've been able to get that to work with the global cert and run a /register request successfully. I can then change the cert and keep going. It's an annoying process to change the cert after every request as I emulate multiple devices, each with its own cert.
I see there's a Pre-Request Script tab. Is there a way to change the client cert in this script? If not with a Pre-Request Script, is there any other place where I can have multiple certs for a single domain and easily switch between them between requests?

I don't think that's possible, but maybe you can trick it by updating your local hosts file to create fake local domains:
104.244.42.130 cert1.api.twitter.com
104.244.42.130 cert2.api.twitter.com
104.244.42.130 cert3.api.twitter.com
104.244.42.130 cert4.api.twitter.com
104.244.42.130 cert5.api.twitter.com
Then map each local domain in Postman to its certificate:
cert1.api.twitter.com
cert2.api.twitter.com
cert3.api.twitter.com
cert4.api.twitter.com
cert5.api.twitter.com
and create an environment for each certificate and update the URL of each request to use the {{cert}} environment variable. Then, by switching environments, you switch the certificate at the same time.
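For example, each environment would then carry one hostname under a variable such as cert (the variable name is illustrative), and every request in the collection references it in its URL:

```text
# Environment "device-1"
cert = cert1.api.twitter.com

# Environment "device-2"
cert = cert2.api.twitter.com

# Request URL shared across all environments
https://{{cert}}/register
```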

Related

Allowing users to point their domain names to my service

I provide my service to my users via client.example.com, and there are pages like
client.example.com/blog
client.example.com/blog/content/
client.example.com/docs/
etc.
I want to allow users to point their own domains to this subdomain, so they can choose any one of the options below:
client.com -> client.example.com
sub.client.com -> client.example.com
client.com/sub/ -> client.example.com
and pages should work automatically like
client.com/blog -> client.example.com/blog
sub.client.com/blog -> client.example.com/blog
client.com/sub/blog -> client.example.com/blog
Also, I use Elastic Beanstalk on Amazon to deploy my React application with nginx (Docker image). Before I start, I want to know if this is possible. I also don't want to give a fixed IP address to my clients, in case I lose that IP. How are the big players like blogger.com, wordpress.com, etc. doing it?
As far as I've researched, a CNAME makes client subdomains possible, and we need an IP address for the named (root) domain; nowhere is the folder case mentioned. For SSL, I can use Let's Encrypt.
I am OK with anything like CloudFlare / Route53 method.
Cloudflare for SaaS is designed for this use case. You would just go to Cloudflare Dashboard > Your Domain (example.com) > SSL > Custom Hostnames. Add a fallback hostname for your clients to link to, e.g. ssl.example.com.
The client would then need to add their custom hostname in your app, then link and verify their custom domain by adding a CNAME (pointing to ssl.example.com) and a TXT record via their own DNS provider. Verification and issuance of the new SSL certificate take a few minutes, handled completely by Cloudflare, and from there on your clients may access your service via the custom hostname (e.g. client.com, sub.client.com, client.com/blog, etc.).
If you need to manipulate the HTTP response as it passes through the customer's hostname, it's also possible to route these requests through a Cloudflare Worker script (linked to */*, i.e. all hostnames/URLs).
Here is an example of how to create a custom hostname programmatically:
import * as Cloudflare from "cloudflare-client";

// Initialize the custom hostnames client for Cloudflare
const customHostnames = Cloudflare.customHostnames({
  zoneId: process.env.CLOUDFLARE_ZONE_ID,
  accessToken: process.env.CLOUDFLARE_API_TOKEN,
});

// Add the client's custom hostname record to Cloudflare
const record = await customHostnames.create({
  hostname: "www.client.com",
  ssl: {
    method: "txt",
    type: "dv",
    settings: {
      min_tls_version: "1.0",
    },
  },
});

// Fetch the status of the custom hostname
const status = await customHostnames.get(record.id);
// => { id: "xxx", status: "pending", ... } including TXT records
Pricing
CF for SaaS is free for 100 hostnames and $0.10/hostname/mo after (source).
Path-based URL forwarding
If you need to forward HTTP traffic to different endpoints, e.g. www.client.com/* (customer domain) to web.example.com (SaaS endpoint), www.client.com/blog/* to blog.example.com, etc., you can achieve that by creating a Cloudflare Worker script with a route handling */* requests (all customer hostnames and all URL paths) that would look similar to this:
export default {
  fetch(req, env, ctx) {
    const url = new URL(req.url);
    const { pathname, search } = url;
    if (pathname === "/blog" || pathname.startsWith("/blog/")) {
      return fetch(`https://blog.example.com${pathname}${search}`, req);
    }
    return fetch(`https://web.example.com${pathname}${search}`, req);
  },
};
References
https://developers.cloudflare.com/cloudflare-for-saas/
https://github.com/kriasoft/cloudflare-client
https://github.com/kriasoft/cloudflare-starter-kit
The simplest approach to this, which I've implemented at scale (10,000+ clients), is the following:
DNS
Have your clients create a CNAME record to either a specific client.example.com or a general clients.example.com. This applies to both root domains (have them set an ALIAS record instead) and subdomains; an IP address is not required, nor recommended, as it does not scale.
Create a database entry that explicitly links that domain/subdomain to the client's account.
Have logic in the backend controller that associates the hostname in the request with a specific client (a security measure) and serves the relevant content.
The above fulfills the first two use cases—it allows the client to link a root domain or subdomain to your service.
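The hostname-to-client lookup in the last step can be sketched like this (a minimal Python sketch; in practice the mapping lives in the database table described above, and all names here are illustrative):

```python
# Illustrative hostname -> client mapping; in practice this is a
# database lookup against the table that registers client domains.
CLIENT_DOMAINS = {
    "client.com": "client-123",
    "sub.client.com": "client-123",
}

def resolve_client(host_header: str) -> str:
    """Map the request's Host header to a registered client."""
    # Strip an optional port and normalize case before the lookup.
    hostname = host_header.split(":")[0].lower()
    client_id = CLIENT_DOMAINS.get(hostname)
    if client_id is None:
        # Reject unregistered hostnames (the security measure above).
        raise LookupError(f"unregistered hostname: {hostname}")
    return client_id
```

Rejecting unknown hostnames outright matters: without it, anyone could point a CNAME at your service and have it serve content under their domain.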
To achieve the third use case, you could allow the client to specify an arbitrary root path for your service to run within. If the client chooses this, you also need to handle redirects to other services that they have on their domain. This is a lot of responsibility for your app, however.
You could instead leverage their registrar: most registrars can do path redirects. This is the simplest approach and requires the least responsibility/maintenance on your end.
I also recommend having an option for redirecting all entry points (ie: root domain, subdomain, root domain+path, subdomain+path) to a primary entry point (ie: root domain + path), for SEO purposes.
Note: You may also use a service, such as wwwizer, that redirects to the www specified if the ALIAS option on the root domain of your client isn't available.
SSL
I recommend using Let's Encrypt's ACMEv2 API to enable SSL for any domain that is set up on your service. There are libraries available that you should be able to use.
Worth mentioning is the challenge, which can occur via DNS or HTTP. When the library you decide on requests a new certificate, Let's Encrypt responds by making a request of its own to ensure either that you control the domain (by checking the DNS records for a predetermined unique hash) or that you control the server it points to (by checking an HTTP path for a predetermined unique hash). That means you need to ensure your client either includes a hash you specify in their DNS, or that you expose a route that adheres to the ACMEv2 challenge-response protocol (handled by the library you choose).
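A minimal, framework-agnostic Python sketch of the HTTP variant (the ACME library you choose supplies the token and key-authorization pair; the names here are illustrative, and real libraries typically wire this route up for you):

```python
# token -> key-authorization pairs handed to us by the ACME client library
CHALLENGES = {}

ACME_PREFIX = "/.well-known/acme-challenge/"

def register_challenge(token, key_authorization):
    """Store a pending challenge before asking Let's Encrypt to validate."""
    CHALLENGES[token] = key_authorization

def serve_challenge(path):
    """Return the challenge body for a validation request, or None.

    Let's Encrypt fetches /.well-known/acme-challenge/<token> over
    plain HTTP and expects the matching key authorization as the body.
    """
    if not path.startswith(ACME_PREFIX):
        return None
    return CHALLENGES.get(path[len(ACME_PREFIX):])
```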
Here is an example library you could use if the service you are building is Python-based; it supports all the features mentioned above and also includes a CLI: https://github.com/komuw/sewer
References
https://help.ns1.com/hc/en-us/articles/360017511293-What-is-the-difference-between-CNAME-and-ALIAS-records-
https://letsencrypt.org/docs/client-options/
https://datatracker.ietf.org/doc/html/rfc8555
http://wwwizer.com
I think what you are asking is how to let a client, who owns their own domain, point a subdomain of it at your website, with a matching certificate somehow. Just from a security standpoint, there is no safe way to do this IMO. Here are the ways, off the top of my head, that you could do it:
You Manage DNS of Subdomain
Your customer could create an NS record delegating the subdomain to a public Route 53 hosted zone which you own. That subdomain would then effectively be yours: you can create DNS records in it, and certificates in ACM which you can use in a CloudFront distribution. You can have multiple FQDNs in one certificate, so you won't need multiple distributions. You will also need to add the domains to the aliases in your distribution.
Client Manages own DNS but you tell them what to do
I'm not sure this works as I haven't tried it, but in the DNS validation of an ACM certificate you can see the records you need to create. You would tell the client the CNAME record they need to create for AWS to issue a certificate for the given domain, i.e. sub.client.com, and they would also need to create CNAMEs pointing to your website. This way the client still manages their own DNS.
Import Certificate
You could get your client to create a cert for you, which you could then import. They would also need to create the CNAME records pointing to your website. This is probably the least secure option, and the certs will require manual rotation.
Cloudfront
Your client can use your website as an origin in their own CloudFront distribution. A bit hacky, but it would work. I don't think this scales with a growing customer base.
Summary
IMO I don't like any of these solutions; they are all messy, most likely non-compliant with security standards, and hard to automate, if automatable at all. Not to say there aren't situations where you might do this. I would suggest you create subdomains of your own domain. Otherwise, consider yourself a hosting company that owns/manages the clients' domains on their behalf, which makes this easier. But IMO it does not make sense to own a domain and then give out control of it, or to transfer certificates. You'll run into trouble with customers who need to hold certain certifications, or need you to hold them, such as SOC 2 for example.

Serve TLS certificate dynamically per Django view instead of via nginx/gunicorn

I'm using Django's request.get_host() in the view code to differentiate between a dynamic number of domains.
For example, if a request comes from www.domaina.com, that domain is looked up in a table and content related to it is returned.
I'm running certbot programmatically to generate the LetsEncrypt certificate (including acme challenge via Django). I store the cert files as base64 strings in PostgreSQL.
That works perfectly fine, but I can't figure out how to 'apply' the certificate on a dynamic per-domain basis.
I know that this is normally done using TLS termination, nginx or even in gunicorn. But that's not dynamic enough for my use-case.
The same goes for wildcard or SAN certificates (not dynamic enough)
So the question is:
Given I have valid LetsEncrypt certs, can I use them to secure Django views at runtime?
Django works as a WSGI application: it gets an HTTP request, does some generic work of its own, then hands the request to the middleware and on to your views.
I'm fairly certain the generic work Django does at the start already requires a plain HTTP request, not a "binary blob of unreadable encrypted stuff".
Perhaps gunicorn can handle HTTPS termination, but I'm not sure.
Normally, nginx or haproxy is used, also because TLS termination is something that needs to be really secure.
I'm using haproxy now, which has a handy feature: you can point it at a directory full of *.pem certificate files and it will read and use them all. So if you write the certs to such a directory and make sure haproxy is reloaded every time a certificate changes, you can get pretty close to a dynamic way of working.
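A minimal Python sketch of that flow, assuming haproxy is pointed at a crt directory; the paths and the reload command are assumptions for illustration, and haproxy expects the certificate chain and private key concatenated into one .pem file:

```python
import pathlib
import subprocess

def install_cert(domain, fullchain_pem, privkey_pem,
                 cert_dir="/etc/haproxy/certs",
                 reload_cmd=("systemctl", "reload", "haproxy")):
    """Write a combined .pem for haproxy and trigger a graceful reload."""
    # haproxy wants the certificate chain and key in a single file.
    path = pathlib.Path(cert_dir) / f"{domain}.pem"
    path.write_text(fullchain_pem + privkey_pem)
    # Reload (not restart) so existing connections are not dropped.
    subprocess.run(list(reload_cmd), check=True)
    return path
```

In a real deployment you would also want to write the file atomically (write to a temp name, then rename) so haproxy never sees a half-written certificate.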

Sandbox Cookies between environments

I have a production environment and a staging environment. I am wondering if I can sandbox cookies between the environments. My setup looks like
Production
domain.com - frontend SPA
api.domain.com - backend Node
Staging
staging.domain.com - frontend SPA
api.staging.domain.com - backend Node
My staging cookies use the domain .staging.domain.com so everything is fine there. But my production cookies use the domain .domain.com so these cookies show up in the staging environment.
I've read one possible solution is to use a separate domain for staging like staging-domain.com but I would like to avoid this if possible. Are there any other solutions or am I missing something about how cookies work?
There are multiple alternatives:
Set your production domains to www.domain.com and api.www.domain.com, and set your cookie domain to .www.domain.com.
This way, your production cookie will not be seen in the staging environment.
or
Use .domain.com, but have your backend behave differently depending on which environment it receives the cookie in.
One solution would be to change the pass phrase used on staging environment to encrypt cookies.
Doing so will render cookies coming from the production invalid.
The method to do so is web server dependent, for example on Apache HTTP server:
http://httpd.apache.org/docs/current/mod/mod_session_crypto.html
Text from above link:
SessionCryptoPassphrase secret
The session will be encrypted with the given key. Different servers can be configured to share sessions by ensuring the same encryption key is used on each server.
If the encryption key is changed, sessions will be invalidated automatically.
So find out how to change the passphrase on the web server in your staging environment, and all cookies coming from production, along with all past cookies issued by staging, will be considered invalid on staging.
An alternative option, if you don't want to use a separate domain or the www subdomain: you can append the staging environment name to the cookie name.
But personally, I would put an API gateway/proxy in front of the backend and the SPA to keep both services under a single domain (domain.com and domain.com/api).
For staging: staging.domain.com and staging.domain.com/api, or a completely separate domain to avoid exposing the staging address in the SSL certificate.
And I would not allow cookie sharing, by omitting the domain when setting the cookie. I would probably also set the cookie path to /api.
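To illustrate with Python's stdlib cookie helper (a sketch; a Node backend would do the equivalent with its own cookie API): omitting the Domain attribute makes the cookie host-only, so a cookie issued by domain.com is not sent to staging.domain.com, and Path=/api narrows it further:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["path"] = "/api"   # only sent with /api/* requests
cookie["session"]["httponly"] = True
# Note: no "domain" attribute is set, so browsers treat the cookie as
# host-only and it stays on the exact host that issued it.
header = cookie.output()  # the Set-Cookie header line to emit
```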

AWS Cloudfront Signed Cookie Local Setup

I've been attempting to set up CloudFront signed cookies for a site, to make authenticating requests for HLS manifest/segment files easier. Setting up the CloudFront origin and the code in a live environment seems simple enough, looking at resources like
https://mnm.at/markus/2015/04/05/serving-private-content-through-cloudfront-using-signed-cookies/
http://www.spacevatican.org/2015/5/1/using-cloudfront-signed-cookies/
What I'm trying to figure out is whether it's possible to have this working in a local environment (localhost) prior to deploying the initial solution. CloudFront itself will forward to the live origin, which will set the cookies for CloudFront and continue on as normal, but since the code isn't live this will not work until deployed.
Seems like a chicken and egg problem here where I need it live to use it, but can not test it (with code or manually) without it deployed.
Any thoughts here?
You won't be able to test/run it properly on localhost: when you try to set cookies for your CloudFront URL, you'll run into cross-domain issues. I'd recommend trying to generate a signed URL first; if the signed URL works, you're heading in the right direction. Setting up a cookie can't go wrong as long as you've properly set the CNAME in the CloudFront web distribution and the CloudFront URL records are set with your domain provider.

how to retrieve a ssl certificate in django?

Is it possible to retrieve the client's SSL certificate from the current connection in Django?
I don't see the certificate in the request context passed from the lighttpd.
My setup has lighttpd and django working in fastcgi mode.
Currently, I am forced to manually connect back to the client's IP to verify the certificate.
Is there a clever technique to avoid this? Thanks!
Update:
I added these lines to my lighttpd.conf:
ssl.verifyclient.exportcert = "enable"
setenv.add-request-header = (
"SSL_CLIENT_CERT" => env.SSL_CLIENT_CERT
)
Unfortunately, the env.SSL_CLIENT_CERT fails to dereference (does not exist?) and lighttpd fails to start.
If I replace the "env.SSL_CLIENT_CERT" with a static value like "1", it is successfully passed to django in the request.META fields.
Anything else, I could try? This is lighttpd 1.4.29.
Yes. Though this question is not Django specific.
Usually web servers have an option to export SSL client-side certificate data as environment variables or HTTP headers. I have done this myself with Apache (not lighttpd).
This is how I did it
On Apache, export SSL certificate data to environment variables
Then, add a new HTTP request headers containing these environment variables
Read headers in Python code
http://redmine.lighttpd.net/projects/1/wiki/Docs_SSL
Looks like the option name is ssl.verifyclient.exportcert.
Though I am not sure how to do step 2 with lighttpd, as I have little experience with it.
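For step 3, once the server forwards the certificate, the Django side reduces to reading request.META and decoding the PEM. A minimal Python sketch, assuming the header is named SSL_CLIENT_CERT as in the lighttpd config attempted in the question:

```python
import ssl

def client_cert_from_meta(meta):
    """Extract the forwarded client certificate as DER bytes.

    Django exposes an SSL_CLIENT_CERT request header as
    META["HTTP_SSL_CLIENT_CERT"]. Returns None if no cert was sent.
    """
    pem = meta.get("HTTP_SSL_CLIENT_CERT")
    if not pem:
        return None
    # PEM -> DER; the result can be fed to a certificate parser (e.g.
    # the cryptography package) to inspect subject, issuer, and so on.
    return ssl.PEM_cert_to_DER_cert(pem)
```

In a view you would call it as client_cert_from_meta(request.META). Keep in mind headers can be spoofed, so the web server must strip any client-supplied SSL_CLIENT_CERT header before setting its own.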