Incorrect validation on SSL - amazon-web-services

I was trying to set up SSL using Certbot. My web server is nginx. When I run the command "sudo ./certbot-auto certonly" and enter my domain, which I purchased through Netfirms and pointed at my Amazon EC2 instance (public IP), I get this error: "Type: unauthorized Detail: Incorrect validation certificate for TLS-SNI-01 challenge." Why is this happening?

I'm assuming it's the apache plugin that you are using.
The way the apache plugin works is that it adds a temporary virtual host with a "fake" certificate and SNI hostname that solves the TLS-SNI-01 challenge. Since this server has multiple IP addresses, I'm not certain the apache plugin is capable of determining the correct IP address to listen on for this temporary virtual host. I haven't seen any success stories that explicitly mention this scenario, at least.
Your best bet might be to switch to the webroot plugin, which works by writing files to your existing DocumentRoot. If you'd like to keep using the automatic apache configuration while using the webroot authenticator, try something like this:
./certbot-auto --authenticator webroot --installer apache -w /var/www/html -d example.com
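Before spending another rate-limited attempt, you can sanity-check that the webroot approach can work with a small probe; this is just a sketch, with /var/www/html and example.com as placeholders for your own webroot and domain:

import pathlib
import urllib.request

# Hypothetical webroot and domain: substitute your own values.
challenge_dir = pathlib.Path("/var/www/html/.well-known/acme-challenge")
challenge_dir.mkdir(parents=True, exist_ok=True)
(challenge_dir / "probe.txt").write_text("ok")

# The HTTP-01 validator fetches challenge files over plain HTTP on port 80.
with urllib.request.urlopen("http://example.com/.well-known/acme-challenge/probe.txt") as resp:
    print(resp.status, resp.read().decode())  # expect: 200 ok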

I had a similar problem, but only when trying to update an existing key.
What I noticed was that the validation error said it found a certificate that had all the other domain names in it that I had already requested in the certificate before.
Why does the validator see the previous certificate?
From the logs, it seems to set up a new VirtualHost for each domain in the new cert in order to verify that the server is the one pointed to by the DNS. Validation requests to these mini VirtualHosts were not working correctly: the validator was seeing the existing cert with every domain in it, so I thought, "my VirtualHost setup is somehow causing a problem!"
I thought that maybe, because I have a wildcard in my VirtualHost, it was somehow getting picked up before the mini temporary VirtualHosts.
I had named my existing hosts with 3-digit numeric prefixes so that I could order them carefully, since Apache says it processes .conf files in alphabetical order. This meant they were processed BEFORE any other .conf files starting with a letter.
I renamed my .conf files by adding a 'c' prefix before the number, and now it appears to be working; it got past the verification phase at least. Except now I have exceeded my 20 certificate requests for the week, so I can't complete the process just yet!! Doh!
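The alphabetical-ordering claim is easy to check: Python's default string sort matches the byte-wise order Apache uses for .conf files, and ASCII digits sort before letters (the file names below are made up for illustration):

# Digit-prefixed files sort ahead of letter-prefixed ones;
# adding a 'c' prefix moves a file after all purely numeric names.
names = ["c010-mysite.conf", "010-mysite.conf", "le_challenge.conf"]
print(sorted(names))
# ['010-mysite.conf', 'c010-mysite.conf', 'le_challenge.conf']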

Related

Using a custom domain for a Django page through traefik

I have my first custom domain (it's through GoDaddy).
I've hooked it up to Cloudflare.
I want to connect to it with traefik.
I have a Django webpage that works fine on port 8000, so I switched it over to 80 and no dice. Trying to connect to my custom domain just hangs, and the port gives me a 404 error.
The traefik dashboard looks fine, and so do my records on Cloudflare (as far as I can tell; I've never done this before).
I was hoping someone could help me connect to my Django page through my custom domain. Is there anything I've done in the evidence provided below that looks wrong?
Is there anything else you would need to see?
Or any steps I've missed?
I receive this error from traefik as the Docker container starts:
traefik2 | time="2023-02-13T14:08:29Z" level=error msg="Unable to obtain ACME certificate for domains \"tgmjack.com\": unable to generate a certificate for the domains [tgmjack.com]: error: one or more domains had a problem:\n[tgmjack.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: 2606:4700:3033::ac43:a864: Invalid response from http://tgmjack.com/.well-known/acme-challenge/PnsiuL5AtrJXM9UQNrLvhlGdm1MpJ8ZS6i_atIVWCA4: \"<!doctype html><html lang=\\\"en\\\"><head><meta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\" /><meta name=\\\"viewport\\\" c\"\n" providerName=myhttpchallenge.acme ACME CA="https://acme-v02.api.letsencrypt.org/directory" routerName=frontend#docker rule="Host(`tgmjack.com`)"
According to ChatGPT, the required file is an ACME challenge file, and it should be present at the URL specified in the log message: "http://tgmjack.com/.well-known/acme-challenge/qC1w4L8-pPVgXvXmWm55u6ETasZWK2iCqJUfZNArY5U".
Investigating, I believe the following few lines from my command line show that the only file on my computer called acme.json is here:
[ec2-user@ip-172-31-19-18 letsencrypt]$ sudo find / -name "acme.json"
/home/ec2-user/thing4/new_ui_51_fix_backend_for_8081/running_prices/TRAEFIK/letsencrypt/acme.json
And also there is no "acme-challenge" anywhere.
So is TRAEFIK/letsencrypt/acme.json the correct file? Because the path looks miles away from what it should be, and I didn't make it.
(Extra info: a collection of screenshots of each thing stated above was attached here.)
Do you have any advice or questions?
PS: this happens on my local machine and on Amazon Linux EC2 containers; I have all my ports open (on the AWS end of things).
Some considerations:
GoDaddy's DNS pointing is ignored if you are using Cloudflare for your DNS, so we only look at Cloudflare's records.
On Cloudflare, you need to remove that random IP you found set as the A record.
You don't need to change your container port from 8000 to 80; instead, set up an "ingress" or a webserver (nginx, for example) to proxy-pass to localhost:8000.
Traefik probably already has an "ingress" used for provisioning the certificate, which is why it returns an error on ".well-known/acme-challenge". This file is used to prove actual ownership of a domain, which is needed to generate a valid SSL/TLS certificate.
To make this work, you need to make sure that when you call your server at localhost:8000/.well-known/acme-challenge it returns the file with the unique key. You can certainly find this information in the traefik documentation: https://doc.traefik.io/traefik/https/acme/ (a link to the tutorial).
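As a quick probe (a sketch; "test-token" is a made-up path component), you can request an arbitrary challenge URL and see who answers. With traefik's HTTP challenge wired up correctly you would expect a plain 404 from traefik, whereas an HTML page like the one in your log means the Cloudflare proxy answered instead:

import urllib.error
import urllib.request

# Hypothetical token: an unknown token should yield a plain 404 from traefik,
# not an HTML page served by the Cloudflare proxy.
url = "http://tgmjack.com/.well-known/acme-challenge/test-token"
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(resp.status, resp.read(200))
except urllib.error.HTTPError as err:
    print(err.code, err.read(200))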
I recommend you start by checking that Cloudflare's targeting is configured correctly, removing anything that is not useful to you.
I hope I have been of some help to you!

Serve TLS certificate dynamically per Django view instead of via nginx/gunicorn

I'm using Django's request.get_host() in the view code to differentiate between a dynamic number of domains.
For example, if a request comes from www.domaina.com, that domain is looked up in a table and content related to it is returned.
I'm running certbot programmatically to generate the LetsEncrypt certificate (including acme challenge via Django). I store the cert files as base64 strings in PostgreSQL.
That works perfectly fine, but I can't figure out how to 'apply' the certificate on a dynamic per-domain basis.
I know that this is normally done with TLS termination in nginx or even in gunicorn. But that's not dynamic enough for my use case.
The same goes for wildcard or SAN certificates (not dynamic enough).
So the question is:
Given I have valid LetsEncrypt certs, can I use them to secure Django views at runtime?
Django works as a WSGI server: it gets an HTTP request, does some generic work of its own, then hands it over to middleware and then to your views.
I'm fairly certain that the generic work Django does at the start already requires a regular HTTP request, not a "binary blob of unreadable encrypted stuff".
Perhaps gunicorn can handle HTTPS termination, but I'm not sure.
Normally, nginx or haproxy is used, also because TLS termination is something that needs to be really secure.
I'm using haproxy now, which has a handy feature: you can just point it at a directory full of *.pem certificate files and it will read them and use them. So if you could write the certs to such a directory and make sure haproxy is reloaded every time a certificate changes, you would be pretty close to a dynamic way of working.
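As a sketch of that idea, assuming the certs live in PostgreSQL as described in the question (the domain_certs table, its column names, and the paths below are all made up), something like this could dump them into haproxy's certificate directory and reload it:

import base64
import pathlib
import subprocess

import psycopg2  # assumes the certs are stored in PostgreSQL, as described above

CERT_DIR = pathlib.Path("/etc/haproxy/certs")  # directory haproxy's crt option points at

def dump_certs_and_reload():
    # Hypothetical schema: one row per domain, fullchain and key base64-encoded.
    conn = psycopg2.connect("dbname=mysite")
    with conn, conn.cursor() as cur:
        cur.execute("SELECT domain, fullchain_b64, privkey_b64 FROM domain_certs")
        for domain, chain_b64, key_b64 in cur.fetchall():
            # haproxy expects the certificate chain and private key
            # concatenated into a single .pem file per domain.
            pem = base64.b64decode(chain_b64) + base64.b64decode(key_b64)
            (CERT_DIR / f"{domain}.pem").write_bytes(pem)
    conn.close()
    # Reload haproxy so it re-reads the certificate directory.
    subprocess.run(["systemctl", "reload", "haproxy"], check=True)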

Why am I getting "Internal Server Error" running two Odoo instances (same domain but different ports)?

I have two instances of Odoo on a server in the cloud. If I follow these steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they work well too.
It's not an nginx issue, because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change this file as shown below and check whether the issue is solved:
addons/web/controllers/main.py
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
And delete the line request.session.db = db, which is below this if statement.
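After deleting that line, the block reads:
if db != request.session.db:
    request.session.logout()
    abort_and_redirect(request.httprequest.url)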
Try the following change in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately, you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should have OpenERP listen on port 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp, because you're connecting directly to OpenERP. If you want to go through Apache, you need to set up ReverseProxy rules in your vhost configs, and OpenERP does not need to listen on all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, which is a pattern used to filter that list. %h represents the domain name and %d is the first domain component of that domain. So, for example, with --db-filter=^%d$ I will only see the test database if I reach the server via http://test.example.com:8069. If only one database matches, the list is not displayed and the user lands directly on the right database. This works even behind Apache reverse proxies if you make sure OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
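For reference, a sketch of the corresponding entries in the server configuration file; the key names below (dbfilter, proxy_mode) are how these options are spelled in later Odoo releases and may differ in your version, so treat them as assumptions:

[options]
; listen on port 8069 on all interfaces
xmlrpc_interface = 0.0.0.0
xmlrpc_port = 8069
; only expose the database whose name matches the first domain component
dbfilter = ^%d$
; trust the X-Forwarded-* headers set by the Apache reverse proxy
proxy_mode = True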
The port-reuse problem comes from trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP, with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The name-based virtual hosts that Apache supports are all handled by a single master process listening on port 80 on all interfaces. If you want to do the same with OpenERP, you only need to start one OpenERP server for all your domains and make it listen on 0.0.0.0, port 8069, as explained above.
On top of that, it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like overkill. OpenERP is designed to be multi-tenant, so that many (read: hundreds of) databases can be served from the same server.
Finally, I think this is the expected behaviour. The browser stores cookies per website, i.e. per domain, and the port is not part of that key. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance, because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser extension: multi-login or multi-account
There is another workaround: if I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox

How do I stop redirects to another website?

I created my own local website (to run on my localhost) called http://testrb.com. But when I key that into a browser, it redirects to someone else's https://www.testrb.com. I want to prevent this and view my own testrb.com. How do I do this? I am using the Apache webserver.
I think the problem comes from the browser: nowadays browsers try to complete an address with the third-level domain "www.". To avoid that, try typing the full address, including the http:// prefix.
If it still doesn't work, resolve the name locally: in /etc/hosts (on UNIX systems), append the domain name testrb.com, separated by a space, to the line starting with 127.0.0.1.
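For example (assuming your site is served from the same machine), the resulting /etc/hosts line would look like this:
127.0.0.1 localhost testrb.com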

How to retrieve an SSL certificate in Django?

Is it possible to retrieve the client's SSL certificate from the current connection in Django?
I don't see the certificate in the request context passed from lighttpd.
My setup has lighttpd and Django working in FastCGI mode.
Currently, I am forced to manually connect back to the client's IP to verify the certificate.
Is there a clever technique to avoid this? Thanks!
Update:
I added these lines to my lighttpd.conf:
ssl.verifyclient.exportcert = "enable"
setenv.add-request-header = (
"SSL_CLIENT_CERT" => env.SSL_CLIENT_CERT
)
Unfortunately, env.SSL_CLIENT_CERT fails to dereference (does not exist?) and lighttpd fails to start.
If I replace env.SSL_CLIENT_CERT with a static value like "1", it is successfully passed to Django in the request.META fields.
Anything else I could try? This is lighttpd 1.4.29.
Yes, though this question is not Django-specific.
Web servers usually have an option to export SSL client-side certificate data as environment variables or HTTP headers. I have done this myself with Apache (not lighttpd).
This is how I did it:
On Apache, export the SSL certificate data to environment variables
Then, add new HTTP request headers containing these environment variables
Read the headers in Python code
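For step 3, here is a minimal Django sketch, assuming the web server forwards the certificate PEM in a request header named SSL_CLIENT_CERT (Django exposes incoming headers in request.META with an HTTP_ prefix):

from django.http import HttpResponse, HttpResponseForbidden

def client_cert_view(request):
    # The SSL_CLIENT_CERT header arrives as HTTP_SSL_CLIENT_CERT in request.META.
    pem = request.META.get("HTTP_SSL_CLIENT_CERT")
    if not pem:
        return HttpResponseForbidden("No client certificate presented")
    # The PEM text could be parsed further, e.g. to read the subject.
    return HttpResponse(pem, content_type="text/plain")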
http://redmine.lighttpd.net/projects/1/wiki/Docs_SSL
It looks like the relevant option is ssl.verifyclient.exportcert.
Though I am not sure how to do step 2 with lighttpd, as I have little experience with it.