I have my first custom domain (it's through GoDaddy),
and I've hooked it up to Cloudflare.
I want to connect to it with Traefik.
I have a Django webpage that works fine on port 8000, so I switched it over to 80, and no dice: trying to connect to my custom domain just hangs, and the port gives me a 404 error.
The Traefik dashboard looks fine, and so do my records on Cloudflare (as far as I can tell; I've never done this before).
I was hoping someone could help me connect to my Django page through my custom domain. Is there anything I've done, in the evidence provided below, that looks wrong?
Is there anything else you would need to see?
Or any steps I've missed?
I receive this error from Traefik as the Docker container starts:
traefik2 | time="2023-02-13T14:08:29Z" level=error msg="Unable to obtain ACME certificate for domains \"tgmjack.com\": unable to generate a certificate for the domains [tgmjack.com]: error: one or more domains had a problem:\n[tgmjack.com] acme: error: 403 :: urn:ietf:params:acme:error:unauthorized :: 2606:4700:3033::ac43:a864: Invalid response from http://tgmjack.com/.well-known/acme-challenge/PnsiuL5AtrJXM9UQNrLvhlGdm1MpJ8ZS6i_atIVWCA4: \"<!doctype html><html lang=\\\"en\\\"><head><meta http-equiv=\\\"content-type\\\" content=\\\"text/html;charset=utf-8\\\" /><meta name=\\\"viewport\\\" c\"\n" providerName=myhttpchallenge.acme ACME CA="https://acme-v02.api.letsencrypt.org/directory" routerName=frontend#docker rule="Host(`tgmjack.com`)"
According to ChatGPT,
the required file is an ACME challenge file, and it should be present at the URL specified in the log message: "http://tgmjack.com/.well-known/acme-challenge/qC1w4L8-pPVgXvXmWm55u6ETasZWK2iCqJUfZNArY5U".
Investigating, I believe the following few lines from my command line show that the only file on my computer called acme.json is here:
[ec2-user@ip-172-31-19-18 letsencrypt]$ sudo find / -name "acme.json"
/home/ec2-user/thing4/new_ui_51_fix_backend_for_8081/running_prices/TRAEFIK/letsencrypt/acme.json
and also there is no "acme-challenge" file anywhere.
So is TRAEFIK/letsencrypt/acme.json the correct file? The path looks miles away from what it should be, and I didn't make it myself.
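One check I can think of: request a made-up token under the challenge path and see who answers ("test" is just a placeholder, not a real token):
curl -i http://tgmjack.com/.well-known/acme-challenge/test
In the ACME error above, the response starts with an HTML doctype, so something other than Traefik's challenge handler appears to be answering.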
#####################################
extra info below
#################################
Below is a collection of screenshots of each thing I've stated above.
Do you have any advice or questions?
PS :)
This happens both on my local machine and in containers on an Amazon Linux EC2 instance, and I have all my ports open (on the AWS end of things).
Some considerations:
GoDaddy's pointing is ignored if you are using Cloudflare for your DNS, so we only need to look at Cloudflare's records.
On Cloudflare you need to remove that random IP you found set as the A record.
You don't need to change your container port from 8000 to 80; instead, use an "ingress", or otherwise a webserver (nginx, for example), to proxy-pass to localhost:8000.
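For example (a rough sketch, assuming Traefik's Docker provider; the image name my-django-image is a placeholder, and the router name frontend matches the one in your log), you can keep Django on 8000 and tell Traefik which port to target via a label:
docker run -d --name django-app \
  --label "traefik.enable=true" \
  --label 'traefik.http.routers.frontend.rule=Host(`tgmjack.com`)' \
  --label "traefik.http.services.frontend.loadbalancer.server.port=8000" \
  my-django-image
Traefik then listens on ports 80/443 itself and forwards traffic to the container's port 8000.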
Traefik probably already has an "ingress" used for provisioning the certificate, which is why it returns an error on ".well-known/acme-challenge". This file is used to prove actual ownership of a domain, and that proof is needed to generate a valid SSL/TLS certificate.
To make this work, you need to make sure that when you call your server at localhost:8000/.well-known/acme-challenge it returns the file with the unique key. You can find this information in the Traefik documentation: https://doc.traefik.io/traefik/https/acme/ is a link to the tutorial.
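For reference, a minimal sketch of the HTTP challenge setup from that page, expressed as Traefik v2 command-line flags (the email address is a placeholder; the resolver name matches the myhttpchallenge in your log, and the volume mount matches the letsencrypt directory you found):
docker run -d --name traefik2 \
  -p 80:80 -p 443:443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD/letsencrypt:/letsencrypt" \
  traefik:v2.9 \
  --providers.docker=true \
  --entrypoints.web.address=:80 \
  --certificatesresolvers.myhttpchallenge.acme.email=you@example.com \
  --certificatesresolvers.myhttpchallenge.acme.storage=/letsencrypt/acme.json \
  --certificatesresolvers.myhttpchallenge.acme.httpchallenge=true \
  --certificatesresolvers.myhttpchallenge.acme.httpchallenge.entrypoint=web
Note that the acme.json you found is just Traefik's certificate storage (created by Traefik itself through that volume mount), not the challenge file: Traefik serves the HTTP-01 challenge response from memory, which is why you won't find an acme-challenge file on disk.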
I recommend you start by checking the correct configuration of the Cloudflare targeting, removing anything that is not useful to you.
I hope I have been of some help to you!
I want to use my Google Cloud Function as a webhook endpoint for a Telegram bot - so that Telegram server makes a request to my function every time there's an update that I need to reply to. (Here's a full guide they provide for this). I have set up such a webhook at a GCF provided address, which looks like https://us-central1-project-name-123456.cloudfunctions.net/processUpdate (where processUpdate is the name of my function).
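For reference, I registered the webhook with the Bot API's setWebhook method, along these lines (<BOT_TOKEN> is a placeholder):
curl "https://api.telegram.org/bot<BOT_TOKEN>/setWebhook?url=https://us-central1-project-name-123456.cloudfunctions.net/processUpdate"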
However, it looks like Telegram doesn't work with my function because of a problem with the certificate. The @CanOfWormsBot bot, created to troubleshoot this, provides an error message:
⛔️ This verified certificate appears to be invalid
https://us-central1-project-name-123456.cloudfunctions.net/processUpdate
Your CN (Common Name) or SAN (Subject Alternative Name) appear not to match your domain name, please verify you're setting the correct domain for the certificate.
CERTIFICATE:
Common Name(CN): misc.google.com
Issuer: Google Internet Authority G3
Alternative Names(SAN): Too many SANS to be shown here.
Issued: 18/06/2019
Expires: 10/09/2019
What's the root cause of this issue? Does it mean that Google misconfigured the certificate they use for cloudfunctions.net? Can I fix this by configuring my cloud function?
I was trying to set up SSL using certbot. My webserver is nginx. When I run the command "sudo ./certbot-auto certonly" I enter my domain, which I purchased through Netfirms. The domain is pointed at my Amazon EC2 instance (public IP). I get this error: "Type: unauthorized Detail: Incorrect validation certificate for TLS-SNI-01 challenge." Why is this happening?
I'm assuming it's the apache plugin that you are using.
The way the apache plugin works is that it adds a temporary <VirtualHost> with a "fake" certificate and SNI hostname that solves the TLS-SNI-01 challenge. Since this server has multiple IP addresses, I'm not certain whether the apache plugin is capable of determining the correct IP address to listen on for this temporary <VirtualHost>. I haven't seen any success stories that explicitly mention this scenario, at least.
Your best bet might be to switch to the webroot plugin, which works by writing files to your existing DocumentRoot. If you'd like to continue using the automatic apache configuration while using the webroot authenticator, try something like this:
./certbot-auto --authenticator webroot --installer apache -w /var/www/html -d example.com
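(To sanity-check the webroot before running that, you could drop a test file in the challenge path and fetch it over plain HTTP; "test" is just a placeholder filename:)
sudo mkdir -p /var/www/html/.well-known/acme-challenge
echo ok | sudo tee /var/www/html/.well-known/acme-challenge/test
curl http://example.com/.well-known/acme-challenge/test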
I had a similar problem, but only when trying to update an existing key.
What I noticed was that the validation error said it found a certificate that had all the other domain names in it that I had already requested in the certificate before.
Why does the validator see the previous certificate?
From the logs, it seems to set up a new VirtualHost for each domain in the new cert in order to verify that the server is the one pointed to by the DNS. Validation requests to these mini VirtualHosts were not working correctly because they were seeing the existing cert with every domain in it. I thought, "my VirtualHost setup is somehow causing a problem!"
I thought maybe, because I have a wildcard in my VirtualHost, it was somehow getting picked up before the mini temporary VirtualHosts.
I had named my existing hosts with 3-digit numeric prefixes so that I could carefully order them, given that Apache says it processes .conf files in alphabetical order. This meant they would get processed BEFORE any other .conf files starting with a letter.
I renamed my .conf files by adding a 'c' prefix before the number, and now it appears to be working, because it at least got past the verification phase; except now I have exceeded my 20 key requests for the week, so I can't complete the process just yet!! Doh!
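(For anyone attempting the same rename, a sketch assuming Debian-style paths and a file called 001-mysite.conf, both placeholders; apachectl -S prints the VirtualHost list in the order Apache actually uses, which is handy for verifying the result:)
cd /etc/apache2/sites-enabled
sudo mv 001-mysite.conf c001-mysite.conf
sudo apachectl -S   # show the parsed VirtualHosts and their order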
I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this as a duplicate: this is not the same as the question asked at "Google sign in website Error : Permission denied to generate login hint for target domain", because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins, as in the following image:
and when I get the error, the request shows that the same URL was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log in the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual URLs (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, namely some IP that can't be accessed via the Internet, you should use a magic DNS service like xip.io, which will give you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your own computer, but it won't work when you're deploying outside the Internet, as in a VPN, an intranet, or behind a tunnel.
So, the steps:
get your IP address, the one you're deploying at and which is not a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized Javascript Origins on the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform, and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead, and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add this domain mydevserver.com to the Authorized JavaScript Origins. If you are using some nonstandard port, then specify it with your domain in the Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
and use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app port to my localhost port.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com.
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue occurred when I hosted my web app locally and used google-auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed from the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder),
and type this in your browser's URL bar: http://localhost/yourfolder/index.html
I wish to get a few web pages, and the sub-links on those, which are password protected. I have the username and the password and can access them from the normal browser UI. But as I wish to save these pages to my local drive for later reference, I am using wget to get them:
wget --http-user=USER --http-password=PASS http://mywiki.mydomain.com/myproject
But the above is not working, as it asks for the password again. Is there any better way to do this, without getting stuck with the system asking for the password again? Also, what is the best option to get all the links and sub-links on a particular page and store them in a single folder?
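For the link-following part, from the wget manual the recursive options would look something like this sketch (the depth and the folder name are just examples; --no-directories flattens everything into the one folder):
wget --http-user=USER --http-password=PASS \
     --recursive --level=2 --no-parent \
     --no-directories --directory-prefix=./myproject-pages \
     http://mywiki.mydomain.com/myproject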
Update:
The actual page I am trying to access is behind an HTTPS gateway, and the certificate for the same is not getting validated. Is there any way to get through this?
mysystem-dsktp ~ $ wget --http-user=USER --http-password=PASS https://secure.site.mydomain.com/login?url=http://mywiki.mydomain.com%2fsite%2fmyproject%2f
--2010-01-24 18:09:21-- https://secure.site.mydomain.com/login?url=http://mywiki.mydomain.com%2fsite%2fmyproject%2f
Resolving secure.site.mydomain.com... 124.123.23.12, 124.123.23.267, 124.123.102.191, ...
Connecting to secure.site.mydomain.com|124.123.23.12|:443... connected.
ERROR: cannot verify secure.site.mydomain.com's certificate, issued by `/C=US/O=Equifax/OU=Equifax Secure Certificate Authority':
Unable to locally verify the issuer's authority.
To connect to secure.site.mydomain.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
I tried the --no-check-certificate option also, but it is not working. I only get the login page with this option, and not the actual page I requested.
Could you try it like this?
wget http://USER:PASSWD@mywiki.mydomain.com/myproject
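(Note: this form sends the same HTTP Basic credentials as the --http-user/--http-password flags, but they end up in your shell history and process list; and if the password contains characters like @ or /, they need to be percent-encoded.)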
It seems you're trying to access a page secured by a login form.
You could use the --no-check-certificate option and follow the suggestions in this forum thread: Can't log in with wget.
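(As a rough sketch of the form-login approach from that thread: first POST the login form and save the session cookie, then fetch the wiki page with it. The field names user and pass are guesses; you'd need the real ones from the login form's HTML.)
wget --no-check-certificate --keep-session-cookies --save-cookies cookies.txt \
     --post-data 'user=USER&pass=PASS' \
     'https://secure.site.mydomain.com/login'
wget --no-check-certificate --load-cookies cookies.txt \
     http://mywiki.mydomain.com/myproject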