Automatically renew wildcard certificates with certbot on NameCheap - port 53 problem?

I'm trying to get an AWS/Lightsail Debian server automatically renewing certificates with certbot. My DNS is with Namecheap.
I'm following the steps in https://blog.bryanroessler.com/2019-02-09-automatic-certbot-namecheap-acme-dns/. I keep getting a no-permission error.
I run:
sudo certbot certonly -d "*.example.com" --agree-tos --manual-public-ip-logging-ok --server https://acme-v02.api.letsencrypt.org/directory --preferred-challenges dns --manual --manual-auth-hook /etc/letsencrypt/acme-dns-auth.py --debug-challenges
I see:
Failed authorization procedure. example.com (dns-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: No TXT record found at _acme-challenge.example.com
The guide says I need to open port 53. I followed Amazon's Lightsail instructions, but neither iptables nor ufw seems to be installed, and when I nmap my machine I don't see port 53. I even installed ufw for lack of a better idea, to no avail.
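A more direct way to tell whether acme-dns is reachable on port 53 from outside than an nmap sweep is to send the instance a DNS query for one of the records the config below serves. A minimal sketch with dnspython (an assumption; any DNS client would do), substituting the real public IP:

import dns.exception
import dns.resolver  # pip install dnspython

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["<public ip>"]  # the Lightsail instance's public IP

try:
    # acme-dns should answer authoritatively for its own A record
    for record in resolver.resolve("acme.example.com", "A", lifetime=5):
        print(record)
except dns.exception.Timeout:
    print("No answer: port 53 is likely still blocked in the Lightsail firewall")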
My /etc/acme-dns/config.cfg is as follows:
#/etc/acme-dns/config.cfg
[general]
# DNS interface
listen = ":53"
protocol = "udp"
# domain name to serve the requests off of
domain = "acme.example.com"
# zone name server
nsname = "ns1.acme.example.com"
# admin email address, where @ is substituted with .
nsadmin = "example.example.com"
# predefined records served in addition to the TXT
records = [
"acme.example.com. A <public ip>",
"ns1.acme.example.com. A <public ip>",
"acme.example.com. NS ns1.acme.example.com.",
]
debug = false
[database]
engine = "sqlite3"
connection = "/var/lib/acme-dns/acme-dns.db"
[api]
api_domain = ""
ip = "127.0.0.1"
disable_registration = false
autocert_port = "80"
port = "8082"
tls = "none"
corsorigins = [
"*"
]
use_header = false
header_name = "X-Forwarded-For"
[logconfig]
loglevel = "debug"
logtype = "stdout"
logformat = "text"
For the listen value, I also tried 127.0.0.1:53 and :53
The settings portion of /etc/letsencrypt/acme-dns-auth.py:
# URL to acme-dns instance
ACMEDNS_URL = "http://127.0.0.1:8082"
# Path for acme-dns credential storage
STORAGE_PATH = "/etc/letsencrypt/acmedns.json"
# Whitelist for address ranges to allow the updates from
# Example: ALLOW_FROM = ["192.168.10.0/24", "::1/128"]
ALLOW_FROM = []
# Force re-registration. Overwrites the already existing acme-dns accounts.
FORCE_REGISTER = False
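For context, the auth hook drives acme-dns over its small HTTP API: a one-time /register call whose credentials land in STORAGE_PATH, then an /update call per renewal that publishes the validation token as a TXT record. A rough sketch of that flow with requests, following the endpoints documented in the acme-dns README (the token value here is a dummy):

import requests

ACMEDNS_URL = "http://127.0.0.1:8082"

# One-time registration: returns username, password, subdomain and fulldomain;
# _acme-challenge.example.com must be a CNAME pointing at that fulldomain
account = requests.post(f"{ACMEDNS_URL}/register", timeout=10).json()
print(account["fulldomain"])

# Per renewal: publish the challenge token (always 43 characters) as TXT
headers = {"X-Api-User": account["username"], "X-Api-Key": account["password"]}
payload = {"subdomain": account["subdomain"], "txt": "a" * 43}
resp = requests.post(f"{ACMEDNS_URL}/update", json=payload, headers=headers, timeout=10)
print(resp.status_code, resp.json())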
Thanks for any help you can provide.

If you don't wish to maintain your own acme-dns server, I built and use this script to automatically renew NameCheap wildcard certs with certbot. I hope it helps:
https://github.com/scribe777/letsencrypt-namecheap-dns-auth

Related

Django: set a connection proxy

I have been seeking a solution for a long time to set a proxy for my Django application.
First, I am using Django==2.0, and I run it on Windows Server 2016 in a local network that connects through a proxy at 10.37.235.99, port 80.
I'm deploying the application using nginx-1.20.1.
I have to scrape data like this:
http_proxy = "10.37.235.99:80"
https_proxy = "10.37.235.99:80"
ftp_proxy = "10.37.235.99:80"
proxyDict = {
"http" : http_proxy,
"https" : https_proxy,
"ftp" : ftp_proxy
}
import socket
if socket.gethostname() == "localhost":
os.environ["PROXIES"] = proxyDict
else:
os.environ["PROXIES"] = {}
URL='my_site.com'
page = requests.get(URL)
print(page)
I tried many solutions from the internet, but nothing worked.
Working with Django: Proxy setup
When I remove the proxy configuration and use Psiphon3 (with its proxy), everything works perfectly.
Is there any solution?
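Rather than environment variables, the proxy mapping can also be passed explicitly per request. A minimal sketch, assuming the proxy from the question (note the scheme prefix, which requests expects):

import requests

proxies = {
    "http": "http://10.37.235.99:80",
    "https": "http://10.37.235.99:80",
}

# The mapping applies only to this request, so it cannot leak into
# other parts of the Django process
page = requests.get("http://my_site.com", proxies=proxies, timeout=10)
print(page.status_code)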

Setting up a load balancer frontend on GCP with Pulumi

Right now I'm learning how to set up a website served from a GCP bucket using Pulumi; however, I'm stuck at the last step: exposing an IP address and attaching it to the LB. Everything looks good except for the notice "This load balancer has no frontend configured".
I think the ForwardingRule is what I need, but it doesn't accept the BackendBucket (see code and output below).
Any suggestions on how to move forward?
####### WEBSITE ##########
web_bucket = gcp.storage.Bucket('web',
    project="myproj",
    cors=[gcp.storage.BucketCorArgs(
        max_age_seconds=3600,
        methods=[
            "GET",
        ],
        origins=["https://myproj.com", "https://sandbox.myproj.com"],
        response_headers=["*"],
    )],
    force_destroy=True,
    location="US",
    uniform_bucket_level_access=True,
    website=gcp.storage.BucketWebsiteArgs(
        main_page_suffix="index.html",
        not_found_page="404.html",
    ),
)
pulumi.export('web bucket', web_bucket.url)

ssl_certificate = gcp.compute.SSLCertificate("SSLCertificate",
    project="myproj",
    name_prefix="certificate-",
    private_key=(lambda path: open(path).read())("ssl/private.key"),
    certificate=(lambda path: open(path).read())("ssl/certificate.crt"))

http_health_check = gcp.compute.HttpHealthCheck("httphealthcheck",
    project="myproj",
    request_path="/",
    check_interval_sec=1,
    timeout_sec=1,
)

# Backend Bucket Service
web_backend = gcp.compute.BackendBucket("web-backend",
    project="myproj",
    description="Serves website",
    bucket_name=web_bucket.name,
    enable_cdn=True,
)

# LB Backend host and path rules
url_map = gcp.compute.URLMap("urlmap",
    project="myproj",
    description="URL mapping",
    default_service=web_backend.id,
    host_rules=[gcp.compute.URLMapHostRuleArgs(
        hosts=["myproj.io"],
        path_matcher="allpaths",
    )],
    path_matchers=[gcp.compute.URLMapPathMatcherArgs(
        name="allpaths",
        default_service=web_backend.id,
        path_rules=[gcp.compute.URLMapPathMatcherPathRuleArgs(
            paths=["/*"],
            service=web_backend.id,
        )],
    )],
)

# Route to backend (bucket backend)
target_https_proxy = gcp.compute.TargetHttpsProxy("targethttpsproxy",
    project="myproj",
    url_map=url_map.id,
    ssl_certificates=[ssl_certificate.id])

# Forwarding rule for External Network Load Balancing using Backend Services
web_forward = gcp.compute.ForwardingRule("webforward",
    project="myproj",
    region="us-central1",
    port_range="80",
    backend_service=web_backend.id,  # this doesn't work
)
Diagnostics:
gcp:compute:ForwardingRule (default):
error: 1 error occurred:
* Error creating ForwardingRule: googleapi: Error 400: Invalid value for field 'resource.backendService': 'https://compute.googleapis.com/compute/beta/projects/myproj/global/backendBuckets/web-backend-576fa1b'. Unexpected resource collection 'backendBuckets'., invalid
I was using the wrong forwarding rule class. Because of the LB setup (a global HTTPS load balancer), a regional forwarding rule was wrong; it needs a global one pointed at the target proxy.
# Global forwarding rule attaching the HTTPS frontend to the target proxy
web_forward = gcp.compute.GlobalForwardingRule("webforward",
    project="myproj",
    port_range="443",
    target=target_https_proxy.self_link,
)
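If you also want a stable public IP on that frontend (the "exposing an IP address" part of the question), one option is to reserve a global address and hand it to the rule. A sketch against the same pulumi_gcp provider used above; the resource name is made up:

# Reserve a static global IP for the LB frontend
web_ip = gcp.compute.GlobalAddress("web-ip", project="myproj")

# Pass ip_address=web_ip.address to the GlobalForwardingRule above,
# then export the IP so DNS can be pointed at it
pulumi.export("lb ip", web_ip.address)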

Is it possible to create AWS ALBs in bulk using a PowerShell script?

Is it possible to create AWS ALBs in bulk using a PowerShell script?
If someone could provide a PowerShell script template, that would be great.
Absolutely: you can install the AWS Tools for PowerShell. Check the link below; there are examples there.
https://aws.amazon.com/powershell/
# Create HTTP Listener
$HTTPListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPListener.Protocol = 'http'
$HTTPListener.InstancePort = 80
$HTTPListener.LoadBalancerPort = 80

# Create HTTPS Listener
$HTTPSListener = New-Object -TypeName 'Amazon.ElasticLoadBalancing.Model.Listener'
$HTTPSListener.Protocol = 'https'
$HTTPSListener.InstancePort = 443
$HTTPSListener.LoadBalancerPort = 443
$HTTPSListener.SSLCertificateId = 'YourSSL'

# Create Load Balancer
New-ELBLoadBalancer -LoadBalancerName 'YourLoadBalancerName' -Listeners @($HTTPListener, $HTTPSListener) -SecurityGroups @('SecurityGroupId') -Subnets @('subnetId1', 'subnetId2') -Scheme 'internet-facing'

# Associate Instances with Load Balancer
Register-ELBInstanceWithLoadBalancer -LoadBalancerName 'YourLoadBalancerName' -Instances @('instance1ID', 'instance2ID')

# Create Application Cookie Stickiness Policy
New-ELBAppCookieStickinessPolicy -LoadBalancerName 'YourLoadBalancerName' -PolicyName 'SessionName' -CookieName 'CookieName'

# Set the Application Cookie Stickiness Policy on the Load Balancer
Set-ELBLoadBalancerPolicyOfListener -LoadBalancerName 'YourLoadBalancerName' -LoadBalancerPort 80 -PolicyNames 'SessionName'
This script is just for one ELB... how do I transform it to create ELBs in bulk?
Also, where do I specify the AWS account credentials?
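The bulk part is essentially a loop over your parameters, with credentials configured once up front (in the AWS Tools for PowerShell that step is Set-AWSCredential or Initialize-AWSDefaultConfiguration). If Python is an option, here is a sketch with boto3 that creates actual ALBs (the elbv2 API) in bulk; all names and IDs are placeholders:

import boto3

# Credentials: a profile from `aws configure`, environment variables,
# or explicit keys passed to the Session
session = boto3.Session(profile_name="my-profile", region_name="us-east-1")
elbv2 = session.client("elbv2")

for name in ["alb-1", "alb-2", "alb-3"]:
    resp = elbv2.create_load_balancer(
        Name=name,
        Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
        SecurityGroups=["sg-cccc3333"],
        Scheme="internet-facing",
        Type="application",
    )
    print(resp["LoadBalancers"][0]["LoadBalancerArn"])

The same foreach pattern applies to the PowerShell script above.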

Creating Kubernetes TLS assets before I know the public and private IPs

Following https://coreos.com/kubernetes/docs/latest/getting-started.html, I wanted to generate the TLS assets for my Kubernetes cluster.
My plan to push those keys via cloud-config to the AWS API when creating the EC2 instances won't work, because I won't know the public and private IPs of those instances in advance.
I thought about moving the CA cert to the instances via the cloud-config and then generating those assets from a script run by a systemd unit file. My biggest concern here is that I don't want to put a CA root cert into a cloud-config.
Does anyone have a solution to this situation?
According to how kube-aws does it, I can set my api-server conf like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kubernetes.mydomain.de
IP.1 = 10.3.0.1
to the "minimal config file" i added
My public DNS DNS.5 = kubernetes.mydomain.de
I omit the MASTER_HOST IP address because I can instead use the FQDN (kubernetes.mydomain.de) to get to that IP
The "K8S_SERVICE_IP", which should be the first IP of my internal IP range (10.3.0.0/24): IP.2 = 10.3.0.1
The worker conf looks like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.*.cluster.internal
The trick here is to set the SAN to the wildcard *.*.cluster.internal. This way all the workers verify with that cert on the internal network, and I don't have to set a specific IP address for each one.
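If you'd rather build these requests programmatically than via openssl config files, the same SAN sets can be expressed with Python's cryptography package. A minimal sketch using the api-server names above (the subject CN is an illustrative choice, not something the guide mandates):

from ipaddress import IPv4Address
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Same entries as the [alt_names] section of the api-server conf;
# a worker cert would instead use a single DNSName("*.*.cluster.internal")
san = x509.SubjectAlternativeName([
    x509.DNSName("kubernetes"),
    x509.DNSName("kubernetes.default"),
    x509.DNSName("kubernetes.default.svc"),
    x509.DNSName("kubernetes.default.svc.cluster.local"),
    x509.DNSName("kubernetes.mydomain.de"),
    x509.IPAddress(IPv4Address("10.3.0.1")),
])

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "kube-apiserver")]))
    .add_extension(san, critical=False)
    .sign(key, hashes.SHA256())
)
print(csr.public_bytes(serialization.Encoding.PEM).decode())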

Kerberos kinit: Resource temporarily unavailable while getting initial credentials

I am in the process of setting up Kerberos on CentOS 7 (more specifically, the Hortonworks HDP 2.3 sandbox) running in a VirtualBox VM. My problem is that kinit seems unable to reach my KDC: the answer is "Resource temporarily unavailable while getting initial credentials" if I add an address to my /etc/hosts file, and if I leave that file as is, I get the message "could not contact any host for realm mycompany while getting initial credentials".
The KDC is running (I can find it with ps, plus the service starts with an "okay" message); same for kadmin.
As a guide for setting up Kerberos I followed these two guides:
CentOS guide
Guide 2
My config files:
krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
[libdefaults]
default_realm = MYCOMPANY.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
MYCOMPANY.COM = {
kdc = kerberos.mycompany.com
admin_server = kerberos.mycompany.com
}
[domain_realm]
.mycompany.com = MYCOMPANY.COM
mycompany.com = MYCOMPANY.COM
kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88,750
[realms]
MYCOMPANY.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
kadm5.acl
*/admin@MYCOMPANY.COM *
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
192.168.1.3 mycompany.com kerberos.mycompany.com
I get the "Resource..." error if I have any address in the third line of the hosts file, if that line is missing I get the "could not contact..." error.
I could trace the kinit command with something along the lines of krb5_trace or something (unfortunately I can't find the link I got it from any more nor remember the exact command) to the address specified in the host file so kinit seems to contact the fitting address, its just that the KDC does not listen there.
Netstat shows that the KDC is listening on the ports specified in the kdc.conf
Any help would be appreciated
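A quick client-side complement to that netstat check is to see whether anything answers on the KDC port from the machine the client runs on. A small sketch, assuming the default KDC port 88 and the hostname from krb5.conf:

import socket

# kinit tries UDP first, but a TCP connect to port 88 is an easy reachability probe
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    try:
        s.connect(("kerberos.mycompany.com", 88))
        print("KDC reachable on tcp/88")
    except OSError as exc:
        print("cannot reach KDC:", exc)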
Okay, so it does work now. Things I did to fix it:
/etc/resolv.conf
mycompany.com 127.0.0.1
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
127.0.0.1 mycompany.com kerberos.mycompany.com
And, most embarrassing: I used kinit mycompany/admin for the principal user/admin@MYCOMPANY.COM, which is of course wrong.
The right call is of course kinit user/admin