Using a custom domain for a Django page through Traefik

I have my first custom domain (it's through GoDaddy)
and I've hooked it up to Cloudflare.
I want to connect to it with Traefik.
I have a Django webpage that works fine on port 8000, so I switched it over to 80 and no dice. Trying to connect to my custom domain just hangs, and the port gives me a 404 error.
The Traefik dashboard looks fine and so do my records on Cloudflare (as far as I can tell; I've never done this before).
I was hoping someone could help me connect to my Django page through my custom domain. Is there anything I've done in the evidence provided below that looks wrong?
Is there anything else you would need to see?
Or any steps I've missed?
I receive this error from Traefik as the Docker container starts:
traefik2 | time="2023-02-13T14:08:29Z" level=error msg="Unable to obtain ACME certificate for domains \"tgmjack.com\": unable to generate a certificate for the domains [tgmjack.com]: error: one or more domains had a problem:\n[tgmjack.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: 2606:4700:3033::ac43:a864: Invalid response from http://tgmjack.com/.well-known/acme-challenge/PnsiuL5AtrJXM9UQNrLvhlGdm1MpJ8ZS6i_atIVWCA4: \"<!doctype html><html lang=\\\"en\\\"><head><meta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\" /><meta name=\\\"viewport\\\" c\"\n" providerName=myhttpchallenge.acme ACME CA="https://acme-v02.api.letsencrypt.org/directory" routerName=frontend#docker rule="Host(`tgmjack.com`)"
According to ChatGPT, the required file is an ACME challenge file, and it should be present at the URL specified in the log message: "http://tgmjack.com/.well-known/acme-challenge/qC1w4L8-pPVgXvXmWm55u6ETasZWK2iCqJUfZNArY5U".
Investigating, I believe the following few lines from my command line show that the only file on my computer called acme.json is here:
[ec2-user@ip-172-31-19-18 letsencrypt]$ sudo find / -name "acme.json"
/home/ec2-user/thing4/new_ui_51_fix_backend_for_8081/running_prices/TRAEFIK/letsencrypt/acme.json
Also, there is no "acme-challenge" file anywhere.
So is TRAEFIK/letsencrypt/acme.json the correct file? The path looks miles away from what it should be, and I didn't create it.
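One check that might narrow it down (the token below is made up; any path under the challenge directory will do) is to request the challenge path yourself and see who actually answers:

curl -i http://tgmjack.com/.well-known/acme-challenge/test-token

If the response is an HTML page with Cloudflare headers rather than a plain 404 from Traefik (the 2606:4700:... address in the log above is in a Cloudflare range), then the HTTP-01 challenge is being answered by Cloudflare's proxy and never reaches Traefik on port 80.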
#####################################
extra info below
#################################
Below is a collection of screenshots of each thing I've stated above.
Do you have any advice or questions?
PS :)
This happens on my local machine and on Amazon Linux EC2 instances; I have all my ports open (on the AWS side of things).

Some considerations:
GoDaddy's DNS settings are ignored if you are using Cloudflare for your DNS, so we only need to look at Cloudflare's.
On Cloudflare you need to remove that random IP you found set as the A record.
You don't need to change your container port from 8000 to 80; instead, an "ingress" or a web server (nginx, for example) should proxy-pass requests to localhost:8000.
Traefik probably already has an "ingress" used to provision the certificate, which is why it returns an error on ".well-known/acme-challenge". This file is used to prove actual ownership of a domain, which is needed to generate a valid SSL/TLS certificate.
To do this you need to make sure that when you call your server at localhost:8000/.well-known/acme-challenge it returns the file with the unique key. You can find this information in the Traefik documentation (https://doc.traefik.io/traefik/https/acme/); this is a link to the tutorial.
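As a rough sketch of that setup (image names and volume paths here are assumptions, not taken from your environment; the resolver name myhttpchallenge and the router name frontend are the ones that appear in your log), keeping Django on port 8000 while Traefik owns ports 80/443 and answers the HTTP-01 challenge could look something like this:

docker run -d --name traefik2 -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD/letsencrypt:/letsencrypt" \
  traefik:v2.9 \
  --providers.docker=true \
  --entrypoints.web.address=:80 \
  --entrypoints.websecure.address=:443 \
  --certificatesresolvers.myhttpchallenge.acme.email=you@example.com \
  --certificatesresolvers.myhttpchallenge.acme.storage=/letsencrypt/acme.json \
  --certificatesresolvers.myhttpchallenge.acme.httpchallenge=true \
  --certificatesresolvers.myhttpchallenge.acme.httpchallenge.entrypoint=web

docker run -d --name django-app \
  --label 'traefik.http.routers.frontend.rule=Host(`tgmjack.com`)' \
  --label 'traefik.http.routers.frontend.entrypoints=websecure' \
  --label 'traefik.http.routers.frontend.tls.certresolver=myhttpchallenge' \
  --label 'traefik.http.services.frontend.loadbalancer.server.port=8000' \
  your-django-image

The Django container keeps listening on 8000 and Traefik does the proxying, so nothing inside the app has to move to port 80 (both containers do need to share a Docker network that Traefik can reach).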
I recommend you start by checking the Cloudflare configuration and removing anything that is not useful to you.
I hope I have been of some help to you!

Related

Incorrect validation on SSL

I was trying to set up SSL using Certbot. My web server is nginx. When I run the command "sudo ./certbot-auto certonly" I enter my domain, which I purchased using Netfirms. The domain is pointed to my Amazon EC2 instance (public IP). I get this error: "Type: unauthorized Detail: Incorrect validation certificate for TLS-SNI-01 challenge." Why is this happening?
I'm assuming it's the Apache plugin that you are using.
The way the Apache plugin works is that it adds a temporary virtual host with a "fake" certificate and SNI hostname that solves the TLS-SNI-01 challenge. Since this server has multiple IP addresses, I'm not certain the Apache plugin is capable of determining the correct IP address to listen on for this temporary virtual host. I haven't seen any success stories that explicitly mention this scenario, at least.
Your best bet might be to switch to the webroot plugin, which works by writing files to your existing DocumentRoot. If you'd like to continue using the automatic Apache configuration while using the webroot authenticator, try something like this:
./certbot-auto --authenticator webroot --installer apache -w /var/www/html -d example.com
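If you want to sanity-check the webroot path before burning another validation attempt, a rough test (the file name here is made up) is to drop a file where the plugin writes its challenges and fetch it over plain HTTP:

sudo mkdir -p /var/www/html/.well-known/acme-challenge
echo ok | sudo tee /var/www/html/.well-known/acme-challenge/test
curl http://example.com/.well-known/acme-challenge/test

If that prints "ok", Let's Encrypt should be able to read the challenge files the webroot plugin writes there.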
I had a similar problem - only when trying to update an existing key.
What I noticed was that the validation error said it found a certificate that had all the other domain names in it that I had already requested in the certificate before.
Why does the validator see the previous certificate?
From the logs it seems to set up a new VirtualHost for each domain in the new cert in order to verify that the server is the one pointed to by the DNS. Validation requests to these mini VirtualHosts do not work correctly if it is seeing the existing cert with every domain in it - I thought, "my VirtualHost setup is somehow causing a problem!"
I thought maybe because I have a wildcard in my VirtualHost it is somehow getting picked up before the mini temporary VirtualHosts.
I had named my existing hosts with 3-digit numeric prefixes so that I could carefully order them, given that Apache says it processes .conf files in alphabetical order. This would mean they would get processed BEFORE any other .conf files starting with a letter.
I renamed my .conf files by adding a 'c' prefix before the number, and now it appears as though it's working because it got past the verification phase at least - except now I have exceeded my 20 key requests for the week, so I can't complete the process just yet!! Doh!

DNS_PROBE_FINISHED_NXDOMAIN for single website

I created this question earlier but was told that it is a DNS issue as opposed to an issue with HSTS. Regardless, here is what I need help troubleshooting:
Issue:
A single site (one that I own) is showing "server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN" when I try to connect to it via Chrome, Firefox, or Safari. I can, however, connect to it via Tor Browser. I can also verify that the address resolves correctly using MxToolbox. I am also not able to connect via two other computers and two other phones, nor via a different WiFi connection or a personal hotspot from my phone. curl and host on the command line are also not able to get a response.
What I've tried:
As I said above, I've tried different internet connections and computers. I've also tried flushing my DNS cache and pointing to another DNS server.
Having said that, I am not sure how else to troubleshoot this. The only change I made to the web app was to add HSTS headers, hence why I created the earlier post. Please let me know what other information I can provide. Otherwise, here are some details about the site itself:
Other information about my stack:
Django web app
Gunicorn / WSGI server
Hosted on Heroku - Cedar-14 stack
DNS setup with AWS route53
domain name registered through AWS
EDIT:
Possibly related: https://serverfault.com/questions/606880/how-can-i-troubleshoot-a-route-53-hosted-zone
I had a similar issue and was not able to open Facebook; all other sites were working fine. Initially, I thought Facebook had blocked me, as I had never faced this issue before. Later, when I searched on Google, I found an article which described the DNS_PROBE_FINISHED_NXDOMAIN issue on Chrome.
I just changed my DNS server addresses to 8.8.8.8 (preferred) and 8.8.4.4 (alternate), and I never faced that issue again.
Reference - https://www.mobipicker.com/dns_probe_finished_nxdomain/
So, from our discussion regarding the NS server records: always make sure that the local NS records match the parent NS records.
In your case there were 2 extra NS records associated with your domain, and that was the reason why your domains and subdomains were acting unhealthy. Once you deleted those records, the domains and subdomains were back to normal.
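For anyone who wants to run the same comparison themselves, dig makes it easy to see both sides (the domain is a placeholder):

dig +short NS yourdomain.com @8.8.8.8
dig +trace yourdomain.com | grep -w NS

The first command shows the NS set a public resolver returns; the second walks the delegation down from the root, so you can see what the parent zone hands out. Both should match the name servers listed in your Route 53 hosted zone.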
You can also try opening an incognito window and accessing the URL there: either keep using it in incognito mode, or close it and the site should then load OK in a normal window.

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error: Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the url of the server in the Authorized Javascript Origins, as in the following image:
and when I get the error, the request shows that the same url was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service in a private network, namely on some IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which will give you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your computer, but it won't work when you're deploying outside the Internet, as in a VPN, intranet, or behind a tunnel.
So, the steps:
Get your IP address, the one you're deploying at (it's not a public domain); let's say it's 10.0.0.1 as an example.
Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins on the Google Developer Console.
Open your site by visiting http://10.0.0.1.xip.io.
Clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add this domain, mydevserver.com, to Authorized JavaScript Origins. If you are using some non-standard port, then specify it together with your domain in Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server, without a DNS entry yet. If you have permission on your local machine just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com:
Google Developers Console
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally and used Google auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed it from the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server, put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder), and type this in your browser URL bar: http://localhost/yourfolder/index.html

ShimmerCat with reverse proxy when using "the old way"

I have used ShimmerCat with sc-tool to connect to my development sites as described here, and everything has always worked like a charm with it, but I also wanted to follow the "old way" of configuring my /etc/hosts. In this case I had a small problem: the server ran OK, and I could access my development site (let's say that I used https://www.example.com:4043/), but I'm also using a reverse proxy as described in this article and in the config file reference. It redirects to a Django app I'm using. Let's say this is my devlove.yaml config file:
---
shimmercat-devlove:
    domains:
        www.example.com:
            root-dir: site
            consultant: 8080
            cache-key: xxxxxxx
        api.example.com:
            port: 8080
The problem is that when I try to access a URL that requests the API, a 404 response is sent from the API. Let me try to explain it through an example. I try to access https://www.example.com:4043/country/, and on this page I make a request to the API: /api/<country>/towns/. The API endpoint then returns a 404 response, so it is not finding this URL, which does not happen when using Google Chrome with sc-tool. I had set both domains, www.example.com and api.example.com, in my /etc/hosts. I have been trying to solve it but without any luck; is there something I'm missing? Any help will be welcome. Thanks in advance.
With a bit more data, we may be able to find the issue. In the meantime, here is a list of troubleshooting tips:
Possible issue: DNS is cached in browser, /etc/hosts is not being used (yet)
This can happen if your browser has not done a DNS lookup since before you changed your /etc/hosts file. In that case the connection is going to a host on the Internet that may not have the API endpoint that you are calling.
Troubleshooting: Check ShimmerCat's log for the requests. If this is the issue, closing and reopening the browser may solve it.
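On Linux, one quick sanity check that the /etc/hosts entries are in place for the domains in the config above (browsers can still cache DNS on top of this) is:

getent hosts www.example.com
getent hosts api.example.com

Both should print the IP you put in /etc/hosts rather than a public address.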
Possible issue: the host header is incorrect
ShimmerCat uses the Host header in HTTP/1.1 requests and the :authority header in HTTP/2 requests to distinguish the domains. It always discards any port number present in them. If these headers are not set, or are set to a domain other than the ones ShimmerCat is configured to listen for, the server will consider the situation so despicable that it will just close the connection.
Troubleshooting: This is not a 404 error, but a connection close (if trying to connect un-proxied, directly to the SSL port where ShimmerCat is listening), or a "Socks Connection Failed" (if trying to connect through ShimmerCat's built-in SOCKS5 proxy). In the former case, the server will print the message "Rejected request to Just https://some-domain-or-ip/some/path" in its log, using the actual value for the domain, or "Rejected request to Nothing" if no header was present. The second case is more complicated, because the SOCKS5 proxy sits before the HTTP routing algorithm.
In any case, the browser will put a red line in the network panel of the developer tools. If you are accessing the server using curl, like this:
curl -k -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
or like
curl -k --http2 -H host:api.incorrect-domain.com https://127.0.0.1:4043/contents/blog/data-density/
(notice the --http2 parameter in the second form), you will get a response:
curl: (56) Unexpected EOF
Extra-tip: There is a field for the network address in the browser's developer tools. Check it, it may tell you something!
Possible issue: something gets messed up when passing the request to the API back-end.
API back-ends are also sensitive to the host header, and to additional things like authentication cookies and request parameters.
Troubleshooting: A way to diagnose things is to invoke ShimmerCat using the --show-proxied-headers command-line option. It makes ShimmerCat report the proxied headers to the log:
Issuing request with headers :authority: api.example.com
:method: GET
:path: /my/api/endpoint/path/
:scheme: https
accept: */*
user-agent: curl/7.47.0
Possible issue: there are two instances or more of ShimmerCat running
...and they are using different configurations. ShimmerCat uses port sharing among several processes to increase availability. A downside of this is that it is perfectly possible to mistakenly start ShimmerCat, forget to stop it, and start it again after changing some configuration bit. The two instances will be running at the same time, and either of them may pick up connections made to the listening port.
Troubleshooting: Shut down all instances of ShimmerCat, then double-check there are none running by using the corresponding form of the ps command, and start the server with the configuration you want.
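For example, something along these lines works on most Linux setups (the process name is an assumption; adjust it to however you launch ShimmerCat):

ps aux | grep -i '[s]himmercat'

The [s] in the pattern keeps the grep command itself out of the listing.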

Connecting a DD-WRT router to a Squid proxy running on AWS

I am trying to get a Linksys router with the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over the WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show relevant part):
While trying to retrieve the URL: /
Note the bare slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), it returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the root domain has been removed, and the string to the right of the domain name is being searched on.
For some background, I have installed Squid with as generic a configuration as possible, and have done it on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch the network settings on my Mac to use this Squid proxy, it works fine. Only the connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...
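One check that can help split the problem in two (the proxy host is a placeholder and 3128 is only Squid's default port) is to point curl straight at Squid from a laptop, bypassing the router entirely:

curl -v -x http://your-squid-host.example.com:3128 http://www.cnn.com/

If that comes back clean, as the Mac test above suggests it will, the mangled URLs are being introduced somewhere on the DD-WRT side rather than by Squid itself.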