Let's Encrypt certificate renewal issue on an Ubuntu 18.04 machine using an Apache server - Django

I am using an Apache server to host my Django (v2.1) app, with a Let's Encrypt certificate installed for HTTPS. The time for renewal has come and certbot is giving me an unauthorized-access error.
When I run the sudo certbot command, I get the following output:
/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: noppera.tk
2: www.noppera.tk
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 2
Cert is due for renewal, auto-renewing...
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for www.noppera.tk
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. www.noppera.tk (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://www.noppera.tk/.well-known/acme-challenge/U0D416-6zOf7YRW0jAVIG8oiLthmpy_xmewRdUlwrQM [34.240.58.158]: 400
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: www.noppera.tk
Type: unauthorized
Detail: Invalid response from
http://www.noppera.tk/.well-known/acme-challenge/U0D416-6zOf7YRW0jAVIG8oiLthmpy_xmewRdUlwrQM
[34.240.58.158]: 400
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
If I run this for option 1, I get the same error (that log is in EDIT 1 below).
What I've already tried:
Installed django-letsencrypt==3.0.1
Added letsencrypt to INSTALLED_APPS in settings.py
Added the following line to urls.py: url(r'^\.well-known/', include('letsencrypt.urls')),
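For reference, a minimal urls.py sketch of that setup (assuming Django 2.1's url()/include() from django.conf.urls and django-letsencrypt 3.0.1):

# urls.py - minimal sketch showing where the django-letsencrypt route fits
from django.conf.urls import url, include

urlpatterns = [
    # django-letsencrypt answers ACME challenge requests under /.well-known/
    url(r'^\.well-known/', include('letsencrypt.urls')),
    # ... the rest of the project's URL patterns ...
]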
Right now the site is accessible over HTTPS. Can anyone help me renew the certificate?
EDIT 1
Option 1 Logs:
/usr/lib/python3/dist-packages/requests/__init__.py:80: RequestsDependencyWarning: urllib3 (1.23) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: noppera.tk
2: www.noppera.tk
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for noppera.tk
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. noppera.tk (http-01): urn:ietf:params:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from https://noppera.tk/.well-known/acme-challenge/y6dj0WW9qDgZiBnDTmXmA5FTSusyjabeE3dZs5eEGpI [34.240.58.158]: "\n\n<html>\n<head>\n <script src=\"https://ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js\"></script>\n\n\n\n\n\n\n\n<style>\n /*"
IMPORTANT NOTES:
- The following errors were reported by the server:
Domain: noppera.tk
Type: unauthorized
Detail: Invalid response from
https://noppera.tk/.well-known/acme-challenge/y6dj0WW9qDgZiBnDTmXmA5FTSusyjabeE3dZs5eEGpI
[34.240.58.158]: "\n\n<html>\n<head>\n <script
src=\"https://ajax.googleapis.com/ajax/libs/jquery/2.2.4/jquery.min.js\"></script>\n\n\n\n\n\n\n\n<style>\n
/*"
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
Django Log for option 1 (noppera.tk)
Invalid HTTP_HOST header: '{{HOST IP}}'. You may need to add '{{HOST IP}}' to ALLOWED_HOSTS.
Bad Request: /console/login/LoginForm.jsp
Not Found: /.well-known/acme-challenge/WRiDAIe3JPBlZXVWduKBYKrmYKbyS3I2eetsth0YBD0
Django Log for option 2 (www.noppera.tk)
Invalid HTTP_HOST header: 'www.noppera.tk'. You may need to add 'www.noppera.tk' to ALLOWED_HOSTS.
Bad Request: /.well-known/acme-challenge/GTX3_zQ6XPymDUn1WVZ_27vO_XtYxPClBD5uA8Y1nhM
Right now, ALLOWED_HOSTS = ["*"]
EDIT 2
Changed ALLOWED_HOSTS = ["*"] to ALLOWED_HOSTS = ["www.noppera.tk", "*"] for option 2, but I get the same error.

I have found a solution. Posting to help others.
The problem was duplicate configs in the apache2/sites-available folder: there were two default configs and two custom configs for my site (one each for HTTP and HTTPS). So I disabled the default configs and reloaded Apache, using sudo a2dissite default-ssl.conf and sudo a2dissite 000-default.conf.
After that I ran sudo certbot and it renewed the certificates successfully.
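Before re-running certbot, it may also help to confirm that requests to the challenge path reach Apache rather than a stray default vhost or the Django app. A rough check with the requests library (the token name below is made up):

# probe the ACME challenge path; a plain Apache 404 is expected for a made-up token,
# while a Django error page or a 400 suggests the request is hitting the wrong vhost/app
import requests

r = requests.get(
    "http://www.noppera.tk/.well-known/acme-challenge/made-up-token",
    allow_redirects=True,
)
print(r.status_code)
print(r.url)           # shows whether the request was redirected to HTTPS
print(r.text[:200])    # a Django HTML error page here is a red flag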
A few useful resources:
https://www.jbarrett.me/blog/items/4/setting-ssl-django-app-lets-encrypt-ubuntu-apache-and-mod_wsgi
https://www.digitalocean.com/community/tutorials/how-to-install-the-apache-web-server-on-ubuntu-18-04#step-5-%E2%80%94-setting-up-virtual-hosts-(recommended)

Related

Message is sent with "send_message" in ejabberd using Postman but not received by the client

I have installed and configured ejabberd on Ubuntu 22.04, successfully created one user with administrator rights, and created some additional users.
Versions:
OS: Ubuntu 22.04 LTS
ejabberd 22.10
I have also configured the ejabberd API via mod_http_api and tested the APIs with Postman; almost every command in the API reference works fine except send_message.
Here is my ejabberd.yml configuration:
hosts:
- B660M-D2H-DDR4
- localhost
- XX.XXX.37.XX
loglevel: info
ca_file: /opt/ejabberd/conf/cacert.pem
certfiles:
- /opt/ejabberd/conf/server.pem
## If you already have certificates, list them here
# certfiles:
# - /etc/letsencrypt/live/domain.tld/fullchain.pem
# - /etc/letsencrypt/live/domain.tld/privkey.pem
listen:
-
port: 5222
ip: "::"
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5223
ip: "::"
tls: true
module: ejabberd_c2s
max_stanza_size: 262144
shaper: c2s_shaper
access: c2s
starttls_required: true
-
port: 5269
ip: "::"
module: ejabberd_s2s_in
max_stanza_size: 524288
-
port: 5443
ip: "::"
module: ejabberd_http
tls: true
request_handlers:
/admin: ejabberd_web_admin
/api: mod_http_api
/bosh: mod_bosh
/captcha: ejabberd_captcha
/upload: mod_http_upload
/ws: ejabberd_http_ws
-
port: 5280
ip: "::"
module: ejabberd_http
request_handlers:
/admin: ejabberd_web_admin
/.well-known/acme-challenge: ejabberd_acme
/api: mod_http_api
-
port: 3478
ip: "::"
transport: udp
module: ejabberd_stun
use_turn: true
## The server's public IPv4 address:
# turn_ipv4_address: "203.0.113.3"
## The server's public IPv6 address:
# turn_ipv6_address: "2001:db8::3"
-
port: 1883
ip: "::"
module: mod_mqtt
backlog: 1000
s2s_use_starttls: optional
acl:
admin:
user: "admin#localhost"
local:
user_regexp: ""
loopback:
ip:
- 127.0.0.0/8
- ::1/128
access_rules:
local:
allow: local
allow: XX.XXX.37.XX
c2s:
deny: blocked
allow: all
announce:
allow: admin
configure:
allow: admin
muc_create:
allow: local
pubsub_createnode:
allow: local
trusted_network:
allow: loopback
api_permissions:
"console commands":
from:
- ejabberd_ctl
who: all
what: "*"
"admin access":
who:
access:
allow:
- acl: admin
oauth:
scope: "ejabberd:admin"
access:
allow:
- acl: admin
what:
- "*"
- "!stop"
- "!start"
"public commands":
who:
ip: 127.0.0.1/8
what:
- status
- connected_users_number
shaper:
normal:
rate: 3000
burst_size: 20000
fast: 100000
shaper_rules:
max_user_sessions: 10
max_user_offline_messages:
5000: admin
100: all
c2s_shaper:
none: admin
normal: all
s2s_shaper: fast
modules:
mod_adhoc: {}
mod_admin_extra: {}
mod_announce:
access: announce
mod_avatar: {}
mod_blocking: {}
mod_bosh: {}
mod_caps: {}
mod_carboncopy: {}
mod_client_state: {}
mod_configure: {}
mod_disco: {}
mod_fail2ban: {}
mod_http_api: {}
mod_http_upload:
put_url: https://@HOST@:5443/upload
custom_headers:
"Access-Control-Allow-Origin": "https://#HOST#"
"Access-Control-Allow-Methods": "GET,HEAD,PUT,OPTIONS"
"Access-Control-Allow-Headers": "Content-Type"
mod_last: {}
mod_mam:
## Mnesia is limited to 2GB, better to use an SQL backend
## For small servers SQLite is a good fit and is very easy
## to configure. Uncomment this when you have SQL configured:
## db_type: sql
assume_mam_usage: true
default: always
mod_mqtt: {}
mod_muc:
access:
- allow
access_admin:
- allow: admin
access_create: muc_create
access_persistent: muc_create
access_mam:
- allow
default_room_options:
mam: true
mod_muc_admin: {}
mod_offline:
access_max_user_messages: max_user_offline_messages
mod_ping: {}
mod_privacy: {}
mod_private: {}
mod_proxy65:
access: local
max_connections: 5
mod_pubsub:
access_createnode: pubsub_createnode
plugins:
- flat
- pep
force_node_config:
## Avoid buggy clients to make their bookmarks public
storage:bookmarks:
access_model: whitelist
mod_push: {}
mod_push_keepalive: {}
mod_register:
## Only accept registration requests from the "trusted"
## network (see access_rules section above).
## Think twice before enabling registration from any
## address. See the Jabber SPAM Manifesto for details:
## https://github.com/ge0rg/jabber-spam-fighting-manifesto
ip_access: trusted_network
mod_roster:
versioning: true
mod_s2s_dialback: {}
mod_shared_roster: {}
mod_stream_mgmt:
resend_on_timeout: if_offline
mod_stun_disco: {}
mod_vcard: {}
mod_vcard_xupdate: {}
mod_version:
show_os: false
I have two observations with messages (send_message):
client to client (PSI)
Postman to client
In the first case I can successfully exchange messages between users in PSI, but when I try to send a message to a client from Postman using the "mod_http_api" API, I get a 200 OK result, yet the message is never delivered and does not show up anywhere (including the logs).
Am I missing something that is important for receiving a message sent via ejabberd's REST API from Postman?
What a strange problem; I cannot reproduce it. You didn't show your command query, and you didn't mention exactly which client and client configuration you are using.
Summary: check whether the command works correctly when using the ejabberdctl command-line tool, use the "normal" message type, send to a bare JID, and try another client such as Gajim (just for debugging the problem).
Details:
I installed ejabberd 22.10 from source code, copied your configuration, disabled the cert and TLS options, started ejabberd, registered an account, logged in with it, and executed this command:
$ ejabberdctl send_message headline uuu@localhost user1@localhost Restart aaa
The client that was logged in as user1@localhost received the stanza and displayed the headline message:
<message to='user1@localhost'
from='uuu@localhost'
type='headline'
id='18154938236359942834'>
<body>aaa</body>
<subject>Restart</subject>
</message>
Please note: in XMPP, "headline" messages are not stored in the offline storage: they are only received by online sessions with positive priority. Maybe you are sending "headline" messages to sessions that are offline, or online with negative priority, or online with no initial presence?
It's preferable to send a "normal" message, which is stored offline:
ejabberdctl send_message normal uuu@localhost user1@localhost ThisisNormal bbb
Also, make sure your client is logged in with positive priority (this is the standard).
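For completeness, here is a rough Python sketch of the same call through mod_http_api, roughly what Postman would send (assuming the /api handler on port 5280 from the config above, an account permitted by api_permissions, and placeholder credentials/JIDs):

# sketch of send_message via ejabberd's mod_http_api (adjust host, auth and JIDs)
import requests

resp = requests.post(
    "http://localhost:5280/api/send_message",
    json={
        "type": "normal",          # "normal" is stored offline; "headline" is not
        "from": "uuu@localhost",
        "to": "user1@localhost",   # bare JID
        "subject": "ThisisNormal",
        "body": "bbb",
    },
    auth=("admin@localhost", "admin-password"),  # placeholder admin credentials
)
print(resp.status_code, resp.text)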

ID apache-service-running in SLS apache.service.running is not a dictionary

I'm trying to install and configure Apache using v1.2.2 of the SaltStack apache-formula:
salt server-test state.apply apache test=true
but I keep getting the following error:
server-test:
Data failed to compile:
ID apache-service-running in SLS apache.service.running is not a dictionary
My Salt master's file_roots and pillar_roots look like this:
file_roots:
base:
- /srv/salt/states
- /srv/salt/formulas/php
- /srv/salt/formulas/nginx
- /srv/salt/formulas/apache-formula
- /srv/salt/formulas/mysql-formula
- /srv/salt/files
pillar_roots:
base:
- /srv/salt/pillar/base
dev:
- /srv/salt/pillar/dev
prod:
- /srv/salt/pillar/prod
I can't figure out where the problem is. Any hints?
pillar/base/apache/server-test.sls
apache:
manage_service_states: False
lookup:
version: '2.4'
# for each site name there must be a config in salt://configs/webserver/apache2/sites/
site_names: [ 'server-test' ]
security:
ServerTokens: Prod
modules:
enabled:
- ssl
- alias
- rewrite
- headers
- shib
- http2
#disabled:
# - php7.3
# others are managed elsewhere
# - status
# - proxy
#- proxy_fcgi
mpm:
module: mpm_event
params:
start_servers: 3
min_spare_threads: 50
max_spare_threads: 100
thread_limit: 64
threads_per_child: 25
max_request_workers: 2000
server_limit: 80
max_connections_per_child: 0
The formula is broken, see issue #383:
manage_service_states: False doesn't work.

codeclimate validate-config Error

I am new to Code Climate and am facing this error when I run my GitHub project on Code Climate.
codeclimate validate-config
ERROR: Unable to parse: (<unknown>): found unexpected end of stream while scanning a quoted scalar at line 23 column 5
Below is my .codeclimate.yml file:
---
machine:
environment:
CODECLIMATE_REPO_TOKEN: ab24b326dac817e772c5246823b67af66e2358e51134c33e20aaf7fb228088b0
engines:
duplication:
enabled: false
config:
languages:
- python
fixme:
enabled: true
pep8:
enabled: true
radon:
enabled: true
ratings:
paths:
- "**.py"
exclude_paths:
- "docs/*"
- "examples/*
-*api/songs/models*
-*/site-packages/*
-*markupsafe/*
-*psycopg2/*
-*six.py*
-*sqlalchemy/*
-*werkzeug/*
-*stringprep.py*
-*uuid.py*
-*ctypes/*
-*decimal.py*
-*encodings/*
-*hmac.py*
-*asyncio/*
-*concurrent/*
-*multiprocessing/*
-*mimetypes.py*
-*numbers.py*
-*pydoc.py*
-*http/*
-*app/api/user/__init__.py*
-*app/api/request/__init__.py*
-*app/api/__init__.py*
-*app/__init__.py*
-*app/config.py*
-*app/model/*
-*test/*
-*html/*
-*_bootlocale.py*
-*typing.py*
Line 23, which is referenced in the error message, is the following line in the file above:
- "examples/*
What should I do to correct this?
The problem with that line is that it's missing the closing quotation mark. However, all of the exclude_paths patterns need to be enclosed in quotes; i.e. the config should look like this:
---
machine:
environment:
CODECLIMATE_REPO_TOKEN: ab24b326dac817e772c5246823b67af66e2358e51134c33e20aaf7fb228088b0
engines:
duplication:
enabled: false
config:
languages:
- python
fixme:
enabled: true
pep8:
enabled: true
radon:
enabled: true
ratings:
paths:
- "**.py"
exclude_paths:
- "docs/*"
- "examples/*"
- "*api/songs/models*"
- "*/site-packages/*"
- "*markupsafe/*"
- "*psycopg2/*"
- "*six.py*"
- "*sqlalchemy/*"
- "*werkzeug/*"
- "*stringprep.py*"
- "*uuid.py*"
- "*ctypes/*"
- "*decimal.py*"
- "*encodings/*"
- "*hmac.py*"
- "*asyncio/*"
- "*concurrent/*"
- "*multiprocessing/*"
- "*mimetypes.py*"
- "*numbers.py*"
- "*pydoc.py*"
- "*http/*"
- "*app/api/user/__init__.py*"
- "*app/api/request/__init__.py*"
- "*app/api/__init__.py*"
- "*app/__init__.py*"
- "*app/config.py*"
- "*app/model/*"
- "*test/*"
- "*html/*"
- "*_bootlocale.py*"
- "*typing.py*"
Note: You'll still see some deprecation warnings because you're using the old .codeclimate.yml format. There is information about converting from the old format to version 2 in the CodeClimate docs:
https://docs.codeclimate.com/docs/advanced-configuration#section-analysis-configuration-versions
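As a side note, this kind of error can also be caught locally before pushing; a small sketch assuming PyYAML is installed (pip install pyyaml):

# validate .codeclimate.yml locally; the parser error points at the offending
# line/column, e.g. the unterminated quote on line 23
import yaml

with open(".codeclimate.yml") as f:
    try:
        yaml.safe_load(f)
        print("YAML parses cleanly")
    except yaml.YAMLError as err:
        print("YAML error:", err)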

Django + Postfix + Gmail + openDKIM -amavis > dkim = neutral (body hash did not verify)

I have followed this tutorial to configure DKIM and Postfix on Debian 7 (wheezy). These instructions are pretty much a standard on the interwebz.
I am using Gmail to send and receive emails with my own domain. I followed these instructions to achieve that.
My problem
I can send and receive emails, but I can't manage to pass the DKIM test (at least with Gmail). After searching and struggling for a while, I have concluded that the cause of my woes is that my message is getting multiple DKIM signatures (see mail.log below), and according to the DKIM rules that is enough for the check to fail.
But after reading up on how to solve the multiple-signatures problem, I found that practically all of the proposed solutions assume 'amavis' is installed. Thing is... I don't have it installed!
In any case, these solutions involve changing Postfix configuration related to the milters in master.cf and/or main.cf. For example, adding this to 'receive_override_options' (again, I don't have that variable since I don't have amavis installed) should solve the issue:
receive_override_options=no_unknown_recipient_checks,no_header_body_checks,no_milters
Or, another solution is commenting out the global milter settings in main.cf...
#smtpd_milters = inet:localhost:12301
#non_smtpd_milters = inet:localhost:12301
...And then adding the milter directive to the "smtpd" and post-amavis services
for inbound authentication and outbound signing respectively:
# inbound messages from internet
# will be authenticated by OpenDKIM milter on port 12301
smtp inet n - - - - smtpd
.......
-o smtpd_milters=inet:localhost:12301
# outbound messages have been through amavis
# will be signed by OpenDKIM milter on port 12301
127.0.0.1:10025 inet n - - - - smtpd
.......
-o smtpd_milters=inet:localhost:12301
Alas, none of this works for me because I don't have amavis installed. What I think is happening is that the Django layer interacts with Postfix in a way that gets the email message DKIM-signed twice by opendkim (see mail.log below).
This is Gmail's response:
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816;
h=message-id:date:to:from:subject:content-transfer-encoding
:mime-version:dkim-signature:arc-authentication-results;
bh=rFbauTH/rtd1+kK8TxaFUe3HjRRJjkoamWIx2IdGVtM=;
b=MKXMH0s3t4rJtnbq1NTX/3Pu7WroJ1/QcMEyAMdQQhF4pFM1imdRTA==
ARC-Authentication-Results: i=1; mx.google.com;
dkim=neutral (body hash did not verify) header.i=@domain.com header.s=mail header.b=X2M3CvND;
spf=pass (google.com: domain of error@domain.com designates 45.76.171.123 as permitted sender) smtp.mailfrom=error@domain.com
This is my mail.log. Notice how the 'DKIM-Signature header added' step is executed twice: once after localhost connects, and again after it reconnects.
Aug 25 16:12:21 domain postfix/smtpd[29238]: connect from localhost[127.0.0.1]
Aug 25 16:12:21 domain postfix/smtpd[29238]: 745D37D599: client=localhost[127.0.0.1]
Aug 25 16:12:21 domain postfix/cleanup[29243]: 745D37D599: message-id=<20170825161221.28656.25384@localhost>
Aug 25 16:12:21 domain opendkim[27899]: 745D37D599: DKIM-Signature header added (s=mail, d=domain.com)
Aug 25 16:12:21 domain postfix/qmgr[29037]: 745D37D599: from=<error@domain.com>, size=44876, nrcpt=1 (queue active)
Aug 25 16:12:21 domain postfix/smtpd[29238]: disconnect from localhost[127.0.0.1]
Aug 25 16:12:21 domain postfix/smtpd[29238]: connect from localhost[127.0.0.1]
Aug 25 16:12:21 domain postfix/smtpd[29238]: 8E8287D5C0: client=localhost[127.0.0.1]
Aug 25 16:12:21 domain postfix/cleanup[29243]: 8E8287D5C0: message-id=<20170825161221.28656.34673@localhost>
Aug 25 16:12:21 domain opendkim[27899]: 8E8287D5C0: DKIM-Signature header added (s=mail, d=domain.com)
Aug 25 16:12:21 domain postfix/qmgr[29037]: 8E8287D5C0: from=<error@domain.com>, size=44876, nrcpt=1 (queue active)
Aug 25 16:12:21 domain postfix/smtpd[29238]: disconnect from localhost[127.0.0.1]
Aug 25 16:12:22 domain postfix/smtp[29244]: 745D37D599: to=<user@gmail.com>, orig_to=<error@domain.com>, relay=gmail-smtp-in.l.google.com[74.125.28.26]:25, delay=0.61, delays=0.05/0.02/0.12/0.41, dsn=2.0.0, status=sent (250 2.0.0 OK 1503677542 r29si5009980pfd.56 - gsmtp)
Aug 25 16:12:22 domain postfix/qmgr[29037]: 745D37D599: removed
Aug 25 16:12:22 domain postfix/smtp[29245]: 8E8287D5C0: to=<user@gmail.com>, orig_to=<error@domain.com>, relay=gmail-smtp-in.l.google.com[74.125.28.26]:25, delay=0.51, delays=0.05/0.01/0.09/0.36, dsn=2.0.0, status=sent (250 2.0.0 OK 1503677542 t196si4944733pgc.158 - gsmtp)
Aug 25 16:12:22 domain postfix/qmgr[29037]: 8E8287D5C0: removed
The localhost connection seen in mail.log is probably the one set up in my Django settings.py:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'localhost'
EMAIL_PORT = 25
EMAIL_HOST_USER = ''
EMAIL_HOST_PASSWORD = ''
EMAIL_USE_TLS = False
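For context, a standalone send_mail() call through this backend opens its own SMTP connection to localhost:25 and closes it afterwards, which matches the separate connect/disconnect pairs in the mail.log above. A minimal sketch (addresses taken from the log):

# each call like this opens and closes its own SMTP connection to localhost:25,
# corresponding to one connect/disconnect pair (and one queue ID) in mail.log
from django.core.mail import send_mail

send_mail(
    subject="Test",
    message="Hello",
    from_email="error@domain.com",        # sender seen in the log above
    recipient_list=["user@gmail.com"],    # recipient seen in the log above
)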
My /etc/postfix/master.cf
smtp inet n - - - - smtpd
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}
My /etc/postfix/main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_cert_file = /etc/postfix/server.pem
smtpd_tls_key_file = $smtpd_tls_cert_file
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
myhostname = automatones.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = localhost.localdomain, localhost, domain.com
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = ipv4
virtual_alias_maps = hash:/etc/postfix/virtual
smtpd_sasl_type = dovecot
smtpd_sasl_auth_enable = yes
queue_directory = /var/spool/postfix
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
milter_protocol = 2
milter_default_action = accept
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
virtual_alias_domains = domain.com
My http://dkimvalidator.com/results:
SpamAssassin Score: 0.472
Message is NOT marked as spam
Points breakdown:
-0.0 SPF_HELO_PASS SPF: HELO matches SPF record
0.1 DKIM_SIGNED Message has a DKIM or DK signature, not necessarily valid
0.4 RDNS_DYNAMIC Delivered to internal network by host with
dynamic-looking rDNS
0.0 T_DKIM_INVALID DKIM-Signature header exists but is not valid
So how can I tweak the Postfix configuration so that it does not trigger the multiple DKIM signature additions I see in mail.log? Or how do I configure the Django settings so that they don't trigger this connect-and-reconnect behaviour?
Any pointers, ideas or suggestions are welcome!
Edit: I found this note in the OpenDMARC README file. Could I get a solution out of this? If so, how can I start implementing it?
(c) If you have a content filter in master.cf that feeds it back into a different smtpd process, you should alter the second smtpd process in master.cf to contain '-o receive_override_options=no_milters' to prevent messages being signed or verified twice. For tips on avoiding DKIM signature breakage, see: http://www.postfix.org/MILTER_README.html#workarounds
I managed to fix this some time ago after a lot of trial and error. The problem is that I don't remember exactly what I did to solve it; I was changing several parameters, sometimes simultaneously, without keeping track (my bad). However, at some point everything worked!
Here are my configuration files. Hope they can guide/help someone.
/etc/postfix/master.cf
#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
smtp inet n - - - - smtpd
-o content_filter=spamassassin
#smtp inet n - - - 1 postscreen
#smtpd pass - - - - - smtpd
#dnsblog unix - - - - 0 dnsblog
#tlsproxy unix - - - - 0 tlsproxy
submission inet n - - - - smtpd
-o syslog_name=postfix/submission
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
-o content_filter=spamassassin
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_path=private/auth
smtps inet n - - - - smtpd
-o syslog_name=postfix/smtps
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
-o content_filter=spamassassin
#628 inet n - - - - qmqpd
pickup fifo n - - 60 1 pickup
cleanup unix n - - - 0 cleanup
qmgr fifo n - n 300 1 qmgr
#qmgr fifo n - n 300 1 oqmgr
tlsmgr unix - - - 1000? 1 tlsmgr
rewrite unix - - - - - trivial-rewrite
bounce unix - - - - 0 bounce
defer unix - - - - 0 bounce
trace unix - - - - 0 bounce
verify unix - - - - 1 verify
flush unix n - - 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
smtp unix - - - - - smtp
relay unix - - - - - smtp
# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
showq unix n - - - - showq
error unix - - - - - error
retry unix - - - - - error
discard unix - - - - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - - - - lmtp
anvil unix - - - - 1 anvil
scache unix - - - - 1 scache
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
# lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
# mailbox_transport = lmtp:inet:localhost
# virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus unix - n n - - pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix - n n - - pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}
dovecot unix - n n - - pipe
flags=DRhu user=email:email argv=/usr/lib/dovecot/deliver -f ${sender} -d ${recipient}
policy-spf unix - n n - - spawn
user=nobody argv=/usr/bin/policyd-spf
spamassassin unix - n n - - pipe
user=debian-spamd argv=/usr/bin/spamc -f -e /usr/sbin/sendmail -oi -f ${sender} ${recipient}
/etc/postfix/main.cf
# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no
# Network information
myhostname = mail.mysite.com
mydomain = mysite.com
myorigin = /etc/mailname
mydestination = $myhostname, $mydomain, localhost.localdomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
# Local alias map
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
queue_directory=/var/spool/postfix
# SSL
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.mysite.com/fullchain.pem
smtpd_tls_key_file = /etc/letsencrypt/live/mail.mysite.com/privkey.pem
smtpd_use_tls=yes
smtpd_tls_auth_only=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_tls_security_level = may
smtp_tls_security_level = may
smtp_tls_loglevel = 1
smtpd_tls_loglevel = 1
smtpd_tls_protocols = !SSLv2, !SSLv3
# SPF
policy-spf_time_limit = 3600s
# https://www.digitalocean.com/community/tutorials/how-to-set-up-a-postfix-e-mail-server-with-dovecot
#local_recipient_maps = proxy:unix:passwd.byname $alias_maps
# Virtual alias mapping
virtual_alias_domains = $mydomain
virtual_alias_maps = hash:/etc/postfix/virtual
# Mail will be stored in users ~/Maildir directories
home_mailbox = Maildir/
mailbox_command =
# From http://wiki2.dovecot.org/HowTo/PostfixAndDovecotSASL
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
# DKIM & DMARC
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:127.0.0.1:12345, inet:127.0.0.1:8893
non_smtpd_milters = inet:127.0.0.1:12345, inet:127.0.0.1:8893
# Require a valid HELO or EHLO command with a fully qualified domain name to stop common spambots
smtpd_helo_required = yes
smtpd_helo_restrictions = reject_non_fqdn_helo_hostname,reject_invalid_helo_hostname,reject_unknown_helo_hostname
# Disable the VRFY command
disable_vrfy_command = yes
# Reject message to allow Postfix to log recipient address information when the connected client breaks any of the reject rules
smtpd_delay_reject = yes
# Reject connections from made up addresses that do not use a FQDN or don't exist. Add external spam filters like Spamhaus or CBL blacklists
# Also add SPF policy (after reject_unauth_destination and after permit_sasl_authenticated)
smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination,check_policy_service unix:private/policy-spf,reject_invalid_hostname,reject_non_fqdn_hostname,reject_non_fqdn_sender,reject_non_fqdn_recipient,reject_unknown_sender_domain,reject_rbl_client sbl.spamhaus.org,reject_rbl_client cbl.abuseat.org
# https://serverfault.com/questions/559088/postfix-not-accepting-relay-from-localhost
default_transport = smtp
relay_transport = relay

ec2-import-instance makes an instance with no Public IP

This is related to my previous question. Basically, to summarize: I
1) Set up a vagrant ubuntu 14.04 box locally
2) Packaged the vagrant instance into a package.box following these instructions
3) Converted the package.box into a .vmdk file using this function
4) Ran the following CLI command:
ec2-import-instance tmpdir/box-disk1.vmdk -f VMDK -t t2.micro -a x86_64 -b <S3 Bucket> -o $AWS_ACCESS_KEY -w $AWS_SECRET_KEY -p Linux
Since I suspected the problem was with something called cloud-init, which I had read about (but have never used and don't really know what it does), I tried the above twice: once with the original /etc/cloud/cloud.cfg file and again with an /etc/cloud/cloud.cfg file I found here.
Basically, what I eventually see in the AWS Console is a running instance that does not have a public IP address. I attached an Elastic IP to the instance, but I can't SSH into that IP address for some reason; it says port 22: Connection refused
I'm at a loss because these instances are launching in the Default VPC which has a security group attached to it that allows all ports and all protocols from any IP.
By the way: I'm pretty new to all of AWS and don't really know my way fully around the console, so any direct guidance would be much appreciated.
Original /etc/cloud/cloud.cfg file:
# The top level settings are used as module
# and system configuration.
# A set of users which may be applied and/or used by various modules
# when a 'default' entry is found it will reference the 'default_user'
# from the distro configuration specified below
users:
- default
# If this is set, 'root' will not be able to ssh in and they
# will get a message to login instead as the above $user (ubuntu)
disable_root: true
# This will cause the set+update hostname module to not operate (if true)
preserve_hostname: false
# Example datasource config
# datasource:
# Ec2:
# metadata_urls: [ 'blah.com' ]
# timeout: 5 # (defaults to 50 seconds)
# max_wait: 10 # (defaults to 120 seconds)
# The modules that run in the 'init' stage
cloud_init_modules:
- migrator
- seed_random
- bootcmd
- write-files
- growpart
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- ca-certs
- rsyslog
- users-groups
- ssh
# The modules that run in the 'config' stage
cloud_config_modules:
# Emit the cloud config ready event
# this can be used by upstart jobs for 'start on cloud-config'.
- emit_upstart
- disk_setup
- mounts
- ssh-import-id
- locale
- set-passwords
- grub-dpkg
- apt-pipelining
- apt-configure
- package-update-upgrade-install
- landscape
- timezone
- puppet
- chef
- salt-minion
- mcollective
- disable-ec2-metadata
- runcmd
- byobu
# The modules that run in the 'final' stage
cloud_final_modules:
- rightscale_userdata
- scripts-vendor
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- phone-home
- final-message
- power-state-change
# System and/or distro specific settings
# (not accessible to handlers/transforms)
system_info:
# This will affect which distro class gets used
distro: ubuntu
# Default user name + that default users groups (if added/used)
default_user:
name: ubuntu
lock_passwd: True
gecos: Ubuntu
groups: [adm, audio, cdrom, dialout, dip, floppy, netdev, plugdev, sudo, video]
sudo: ["ALL=(ALL) NOPASSWD:ALL"]
shell: /bin/bash
# Other config here will be given to the distro class and/or path classes
paths:
cloud_dir: /var/lib/cloud/
templates_dir: /etc/cloud/templates/
upstart_dir: /etc/init/
package_mirrors:
- arches: [i386, amd64]
failsafe:
primary: http://archive.ubuntu.com/ubuntu
security: http://security.ubuntu.com/ubuntu
search:
primary:
- http://%(ec2_region)s.ec2.archive.ubuntu.com/ubuntu/
- http://%(availability_zone)s.clouds.archive.ubuntu.com/ubuntu/
- http://%(region)s.clouds.archive.ubuntu.com/ubuntu/
security: []
- arches: [armhf, armel, default]
failsafe:
primary: http://ports.ubuntu.com/ubuntu-ports
security: http://ports.ubuntu.com/ubuntu-ports
ssh_svcname: ssh
Second try /etc/cloud/cloud.cfg file:
users:
- default
disable_root: 1
ssh_pwauth: 0
locale_configfile: /etc/sysconfig/i18n
mount_default_fields: [~, ~, 'auto', 'defaults,nofail', '0', '2']
resize_rootfs_tmp: /dev
ssh_deletekeys: 0
ssh_genkeytypes: ~
syslog_fix_perms: ~
cloud_init_modules:
- bootcmd
- write-files
- resizefs
- set_hostname
- update_hostname
- update_etc_hosts
- rsyslog
- users-groups
- ssh
cloud_config_modules:
- mounts
- locale
- set-passwords
- timezone
- runcmd
cloud_final_modules:
- scripts-per-once
- scripts-per-boot
- scripts-per-instance
- scripts-user
- ssh-authkey-fingerprints
- keys-to-console
- final-message
system_info:
distro: rhel
default_user:
name: ec2-user
paths:
cloud_dir: /var/lib/cloud
templates_dir: /etc/cloud/templates
ssh_svcname: sshd
EOF
This is happening because when you transferred the instance to AWS from your local machine, there was no PEM key pair associated with that instance, which is why you were not able to SSH in.
After you took an image of the instance and launched a new instance from it with an associated key pair, you were able to SSH into the instance.
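A rough boto3 sketch of that workaround, creating an AMI from the imported instance and launching a new instance with a key pair attached (the region, instance ID and key name are placeholders):

# sketch: create an AMI from the imported instance, then launch a copy with an SSH key pair
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # placeholder region

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: the imported instance
    Name="imported-vagrant-box",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t2.micro",
    KeyName="my-keypair",               # the key pair the imported instance lacked
    MinCount=1,
    MaxCount=1,
)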