I am trying to have Airflow email me via AWS SES whenever a task in my DAG fails or retries. I am using my AWS SES credentials rather than my general AWS credentials.
My current airflow.cfg:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 25
smtp_mail_from = myemail@myjob.com
Current task in my DAG that is designed to intentionally fail and retry:
testfaildag_library_install_jar_jdbc = PythonOperator(
    task_id='library_install_jar',
    retries=3,
    retry_delay=timedelta(seconds=15),
    python_callable=add_library_to_cluster,
    params={'_task_id': 'cluster_create', '_cluster_name': CLUSTER_NAME, '_library_path': 's3000://fakepath.jar'},
    dag=dag,
    email_on_failure=True,
    email_on_retry=True,
    email='myname@myjob.com',
    provide_context=True
)
Everything works as designed: the task retries the set number of times and ultimately fails, except that no emails are sent. I have checked the logs of the task mentioned above too, and SMTP is never mentioned.
I've looked at the similar question here, but the only solution there did not work for me. Additionally, Airflow's documentation, such as their example here, does not seem to work for me either.
Does SES work with Airflow's email_on_failure and email_on_retry functions?
What I am currently thinking of doing is using the on_failure_callback function to call a Python script provided by AWS here to send an email on failure, but that is not the preferred route at this point.
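For context, here is a minimal sketch of what such a callback might look like, assuming boto3 is installed and the sender address is verified in SES (the region and addresses are placeholders):
# Hypothetical on_failure_callback sketch; region and addresses are placeholders.
import boto3

def notify_failure(context):
    ti = context['task_instance']
    ses = boto3.client('ses', region_name='us-east-1')
    ses.send_email(
        Source='myemail@myjob.com',  # must be verified in SES
        Destination={'ToAddresses': ['myname@myjob.com']},
        Message={
            'Subject': {'Data': 'Airflow failure: %s' % ti.task_id},
            'Body': {'Text': {'Data': 'Task %s in DAG %s failed.' % (ti.task_id, ti.dag_id)}},
        },
    )
The callback would then be wired up with on_failure_callback=notify_failure on the operator, but I would rather get the built-in email alerts working.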
Thank you, appreciate any help.
--updated 6/8 with working SES
Here's my write-up on how we got it all working. There is a short summary at the bottom of this answer.
A couple of big points:
We initially decided not to use Amazon SES and to use sendmail instead, but we have since gotten SES up and working (the working SES config is at the end of this answer).
It is the airflow worker that services the email_on_failure and email_on_retry features. You can run journalctl -u airflow-worker -f to monitor it during a DAG run. On your production server, you do NOT need to restart your airflow-worker after changing your airflow.cfg with new SMTP settings; they should be picked up automatically, so there is no need to worry about disrupting currently running DAGs.
Here is the technical write-up on how to use sendmail:
Since we changed from SES to sendmail on localhost, we had to change our SMTP settings in airflow.cfg.
The new config is:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
#smtp_user = not used
#smtp_password = not used
smtp_port = 25
smtp_mail_from = myjob@mywork.com
This works in both production and local airflow instances.
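To sanity-check the local MTA independently of Airflow, a one-liner like this is enough (sketch; substitute a recipient you can check):
echo "Subject: sendmail test" | /usr/sbin/sendmail -v someone@example.com
The -v flag prints the SMTP conversation, which tells you immediately whether delivery is being attempted.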
Some common errors one might receive if their config is not like mine above:
socket.error: [Errno 111] Connection refused -- you must change your smtp_host line in airflow.cfg to localhost
smtplib.SMTPException: STARTTLS extension not supported by server. -- you must change your smtp_starttls in airflow.cfg to False
In my local testing, I tried to force airflow to log what was actually happening when it tried to send an email, so I created a test DAG as follows:
# Airflow imports
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.dummy_operator import DummyOperator

# General imports
from datetime import datetime, timedelta

def throwerror():
    raise ValueError("Failure")

args = {
    'owner': 'me',
    'email': ['me@myjob'],
    'depends_on_past': False,
    'start_date': datetime(2018, 5, 24),
    'end_date': datetime(2018, 6, 28)
}

dag = DAG(
    dag_id='testemaildag',
    default_args=args,
    catchup=False,
    schedule_interval="* 18 * * *"
)

t1 = DummyOperator(
    task_id='extract_data',
    dag=dag
)

t2 = PythonOperator(
    task_id='fail_task',
    dag=dag,
    python_callable=throwerror
)

t2.set_upstream(t1)
If you run journalctl -u airflow-worker -f, you can see that the worker says it has sent an alert email about the failure to the address in your DAG, but we were still not receiving the email. We then decided to look into the sendmail logs with cat /var/log/maillog. We saw entries like this:
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: connect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: ID: client=localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/cleanup[port]: ID: message-id=<randomMessageID@production-server-ip-range-ec2-instance>
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: disconnect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/qmgr[port]: MESSAGEID: from=<myjob@mycompany.com>, size=1297, nrcpt=1 (queue active)
Jun 5 14:10:55 production-server-ip-range postfix/smtp[port]: connect to aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
Jun 5 14:11:25 production-server-ip-range postfix/smtp[port]: connect to alt1.aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
This was the biggest "oh, duh" moment: here we can finally see what is actually going on in our SMTP service. We used telnet to confirm that we could not connect to Gmail's targeted IP ranges.
In other words, the email was being handed off, but the sendmail service was unable to connect out to those IP ranges.
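A quick way to reproduce that check without telnet is a few lines of Python (sketch; the host is just the Gmail MX from the log above):
import smtplib

try:
    server = smtplib.SMTP('aspmx.l.google.com', 25, timeout=10)
    print(server.ehlo())  # prints (250, ...) if the connection succeeds
    server.quit()
except Exception as exc:
    print('Could not connect: %s' % exc)
If this times out from the production host, the problem is network egress, not airflow.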
We decided to allow all outbound traffic on port 25 in AWS (our airflow production environment is an EC2 instance), and it now works successfully. We are now able to receive emails on failures and retries. (Tip: email_on_failure and email_on_retry default to True per the DAG API reference, so you do not need to put them in your args, but it is still good practice to explicitly state True or False.)
SES now works. Here is the airflow config:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 587
# must be a verified SES email
smtp_mail_from = myemail@myjob.com
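With that in place, a one-off smoke test from the worker host confirms the settings without waiting for a task to fail (sketch; send_email reads the [smtp] section of airflow.cfg):
from airflow.utils.email import send_email

send_email(
    to='myname@myjob.com',
    subject='SES smoke test',
    html_content='If this arrives, the SMTP settings in airflow.cfg work.',
)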
Thanks!
Similar case here: I tried to follow the same debugging process but got no log output. Also, the outbound rules for my airflow EC2 instance are open to all ports and IPs, so it must be some other cause.
I noticed that when you create SMTP credentials from SES, it also creates an IAM user. I am not sure how Airflow is running in your case (bare metal on an EC2 instance or wrapped in containers), or how that user's access is set up.
I am completely new to using ElastAlert. I am trying to use ElastAlert to trigger an email when no log is sent to logstash from my client server. I have successfully installed ElastAlert on my master server. However, when I run elastalert-create-index I get the following error:
Traceback (most recent call last):
  File "/usr/bin/elastalert-create-index", line 11, in <module>
    load_entry_point('elastalert==0.1.21', 'console_scripts', 'elastalert-create-index')()
  File "/usr/lib/python2.7/site-packages/elastalert-0.1.21-py2.7.egg/elastalert/create_index.py", line 77, in main
    username = args.username if args.username else data.get('es_username')
UnboundLocalError: local variable 'data' referenced before assignment
My config.yaml is as follows:
# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example_rules
# How often ElastAlert will query Elasticsearch
# The unit can be anything from weeks to seconds
run_every:
  minutes: 1
# ElastAlert will buffer results from the most recent
# period of time, in case some log sources are not in real time
buffer_time:
  minutes: 15
# The Elasticsearch hostname for metadata writeback
# Note that every rule can have its own Elasticsearch host
es_host: localhost
# The Elasticsearch port
es_port: 9200
# The AWS region to use. Set this when using AWS-managed elasticsearch
#aws_region: us-east-1
# The AWS profile to use. Use this if you are using an aws-cli profile.
# See http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
# for details
#profile: test
# Optional URL prefix for Elasticsearch
#es_url_prefix: elasticsearch
# Connect with TLS to Elasticsearch
#use_ssl: True
# Verify TLS certificates
#verify_certs: True
# GET request with body is the default option for Elasticsearch.
# If it fails for some reason, you can pass 'GET', 'POST' or 'source'.
# See http://elasticsearch-py.readthedocs.io/en/master/connection.html?highlight=send_get_body_as#transport
# for details
#es_send_get_body_as: GET
# Option basic-auth username and password for Elasticsearch
#es_username:
#es_password:
# Use SSL authentication with client certificates client_cert must be
# a pem file containing both cert and key for client
#verify_certs: True
#ca_certs: /path/to/cacert.pem
#client_cert: /path/to/client_cert.pem
#client_key: /path/to/client_key.key
# The index on es_host which is used for metadata storage
# This can be an unmapped index, but it is recommended that you run
# elastalert-create-index to set a mapping
writeback_index: elastalert_status
# If an alert fails for some reason, ElastAlert will retry
# sending the alert until this time period has elapsed
alert_time_limit:
  days: 2
Did you try running elastalert-create-index without any arguments? The UnboundLocalError in your traceback suggests the script never loaded a config file (data is only assigned after one is read). Run interactively, it guides you through the setup process like this:
$>elastalert-create-index
Enter Elasticsearch host: localhost
Enter Elasticsearch port: 9200
Use SSL? t/f: f
Enter optional basic-auth username (or leave blank):
Enter optional basic-auth password (or leave blank):
Enter optional Elasticsearch URL prefix (prepends a string to the URL of every request):
New index name? (Default elastalert_status)
Name of existing index to copy? (Default None)
Elastic Version:6
Mapping used for string:{'type': 'keyword'}
New index elastalert_status created
Done!
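If you would rather script this than answer prompts, check which command-line flags your installed version actually supports first, since the argument set has changed between releases:
elastalert-create-index --help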
There seems to be nothing on the web about this...
How do you parametrize sentry.conf.py to use Amazon SES backend for emails?
Right now, in a Django project, we use:
EMAIL_BACKEND = 'django_ses.SESBackend'
EMAIL_USE_SSL = True
AWS_ACCESS_KEY_ID = 'key'
AWS_SECRET_ACCESS_KEY = 'secret'
AWS_SES_REGION_NAME = 'eu-west-1'
AWS_SES_REGION_ENDPOINT = 'email.eu-west-1.amazonaws.com'
Sentry is a bit different; does anyone have insights?
Thanks a lot,
You can configure Sentry to send emails using an SMTP server, and you can obtain SMTP credentials from SES.
To set up SES for use through the SMTP interface, follow this guide: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-smtp.html
Then configure your Sentry installation to use those credentials (see https://docs.sentry.io/server/config/#mail).
Example config.yml:
mail.backend: 'smtp'
mail.host: 'email-smtp.eu-west-1.amazonaws.com'
mail.port: 587
mail.username: 'myuser'
mail.password: 'mypassword'
mail.use-tls: true
# The email address to send on behalf of
mail.from: 'sentry@example.com'
Configuration with django-amazon-ses (different from django-ses), without SMTP credentials, is as follows.
In sentry/config.yml:
mail.backend: 'django_amazon_ses.EmailBackend'
mail.from: 'from@example.com'
# Comment or delete the other `mail.*` options
In sentry/sentry.conf.py, if your region is not us-east-1:
AWS_DEFAULT_REGION = "eu-west-1"
In sentry/enhance-image.sh (since v22.6.0):
pip install django-amazon-ses
Then restart Sentry as usual with ./install.sh and docker compose up -d
PS: this assumes the instance or the default profile has the IAM ses:SendEmail permission.
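For reference, a minimal IAM policy granting that might look like the following (sketch; scope Resource down if your setup allows):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}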
I have been asked to do some system admin work and move a legacy PHP web application to an Amazon EC2 instance running Debian. I have done this, and emails are successfully being sent from postfix.
The previous system admin expressed concern that the server was not using an email relay, and a request to use SES seemed straightforward. I have implemented a mail relay using Mailgun from a Rackspace instance, and though not trivial, I got this done in a couple of hours.
I have not found the SES process quite so simple, and I suspect this is because I am unfamiliar with using certificates.
Initially I set up the service using the instructions here
http://docs.aws.amazon.com/ses/latest/DeveloperGuide/postfix.html
Elastic IP set up for server
Credentials created for SMTP server
Created IAM user and got a username and password for SMTP at
email-smtp.us-west-2.amazonaws.com
I created an /etc/postfix/sasl_passwd file with
[email-smtp.us-west-2.amazonaws.com]:25 USERNAME:PASSWORD
I then ran
postmap hash:/etc/postfix/sasl_passwd
to create the sasl_passwd.db
/etc/postfix/master.cf did not have smtp_fallback_relay in it
I created a certificate by installing sasl2-bin (apt-get install sasl2-bin) and running
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout server.key -out server.crt
and pointing postfix to this in my main.cf (at the end of this post).
I am using sendmail to send an email via Python:
import os

SENDMAIL = "/usr/sbin/sendmail"  # sendmail location
FROM = "andy@travelinsurancequotes.com.au"
#TO = ["kirstie@travelinsurancequotes.com.au", "jason@slatescience.com"]
TO = ["jason@slatescience.com"]
SUBJECT = "Artog SMTP server is working!"
TEXT = "Sending emails on the TIQ webserver is working"

# Prepare the actual message (the blank line separates headers from body)
message = """\
From: %s
To: %s
Subject: %s

%s
""" % (FROM, ", ".join(TO), SUBJECT, TEXT)

# Send the mail
p = os.popen("%s -f %s -t -i" % (SENDMAIL, FROM), "w")
p.write(message)
status = p.close()
if status:
    print "Sendmail exit status", status
but I keep getting a timeout error on sending:
Feb 26 03:18:19 lamp postfix/error[23414]: 5DE3240508: to=<jason#slatescience.com>, relay=none, delay=0.02, delays=0.02/0/0/0, dsn=4.4.1, status=deferred (delivery temporarily suspended: connect to email-smtp.us-west-2.amazonaws.com[54.187.123.10]:25: Connection timed out
I can connect via port 25
root@lamp /home/www# telnet email-smtp.us-west-2.amazonaws.com 25
Trying 54.149.142.243...
Connected to ses-smtp-us-west-2-prod-14896026.us-west-2.elb.amazonaws.com.
Escape character is '^]'.
220 email-smtp.amazonaws.com ESMTP
My main.cf file is
myhostname = travelinsurancequotes.com.au
mydomain = travelinsurancequotes.com.au
inet_interfaces = all
mynetworks_style = host
local_destination_recipient_limit = 300
local_destination_concurrency_limit = 5
recipient_delimiter=+
smtpd_banner = $myhostname
smtpd_sasl_auth_enable = yes
smtp_sasl_mechanism_filter = plain
smtpd_sasl_local_domain = $myhostname
broken_sasl_auth_clients = yes
smtpd_helo_required = yes
smtp_use_tls = yes
smtpd_use_tls = yes
smtp_tls_note_starttls_offer = yes
smtpd_tls_key_file = /etc/postfix/sslcerts/server.key
smtpd_tls_cert_file = /etc/postfix/sslcerts/server.crt
smtpd_tls_loglevel = 1
smtpd_tls_received_header = yes
smtpd_tls_session_cache_timeout = 3600s
tls_random_source = dev:/dev/urandom
relayhost = [email-smtp.us-west-2.amazonaws.com]:25
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_use_tls = yes
smtp_tls_security_level = encrypt
smtp_tls_note_starttls_offer = yes
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
AWS EC2 throttles outbound mail sent over port 25 by default.
I had that error, and Amazon Support told me to fill out this form to remove the limit:
https://aws.amazon.com/forms/ec2-email-limit-rdns-request
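Alternatively, SES also accepts SMTP connections on ports 587 and 465, so pointing the relay at 587 sidesteps the port 25 throttle entirely (sketch; re-run postmap on sasl_passwd and reload postfix afterwards):
# in /etc/postfix/main.cf
relayhost = [email-smtp.us-west-2.amazonaws.com]:587

# in /etc/postfix/sasl_passwd
[email-smtp.us-west-2.amazonaws.com]:587 USERNAME:PASSWORD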
I hope this helps
Amazon has instructions for postfix and sendmail, but not OpenSMTPD, so I'm adding them here.
Tested with OpenBSD 5.8
Verify your domain and a sender in AWS SES console. Save your SMTP Settings.
Set up the SMTP authentication details in the mail secrets database (replacing $smtpUsername:$smtpPassword with the values from step 1):
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "ses $smtpUsername:$smtpPassword" >> /etc/mail/secrets
# makemap /etc/mail/secrets
Configure OpenSMTPD:
# nano /etc/mail/smtpd.conf
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
accept for local alias <aliases> deliver to mbox
accept from local for any relay via tls+auth://ses#email-smtp.us-east-1.amazonaws.com auth <secrets>
Restart OpenSMTPD:
# rcctl restart smtpd
Test it:
# sendmail -v -f verified-sender@verified-domain.com to@example.com
Subject: test subject
test body
^D
Errors?
watch your line-breaks in smtpd.conf
# smtpd -n to check for syntax errors in smtpd.conf
Try port 587 if your machine is blocking port 25 (add :587 to the end of the AWS URL in smtpd.conf).
I'm using the following project for enabling APNS in my project:
https://github.com/stephenmuss/django-ios-notifications
I'm able to send and receive push notifications on my production app fine, but the sandbox APNS has strange issues that I'm not able to solve: it constantly fails to connect to the push service. When I manually call _connect() on the APNService or FeedbackService classes, I get the following error:
  File "/Users/MyUser/git/prod/django/ios_notifications/models.py", line 56, in _connect
    self.connection.do_handshake()
Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure')]
I tried recreating the APN certificate a number of times and keep getting the same error. Is there anything else I'm missing?
I'm using the endpoints gateway.push.apple.com and gateway.sandbox.push.apple.com for connecting to the service. Is there anything else I should look into for this? I have read the following:
Apns php error "Failed to connect to APNS: 110 Connection timed out."
Converting PKCS#12 certificate into PEM using OpenSSL
Error Using PHP for iPhone APNS
It turns out Apple changed the SSL context from SSL3 to TLSv1 in development. They will do this in production eventually (not sure when). The following link shows my pull request, which was accepted into the above project:
https://github.com/stephenmuss/django-ios-notifications/commit/879d589c032b935ab2921b099fd3286440bc174e
Basically, use OpenSSL.SSL.TLSv1_METHOD if you're using Python, or something similar in other languages.
Although OpenSSL.SSL.SSLv3_METHOD works in production, it may not work in the near future. OpenSSL.SSL.TLSv1_METHOD works in both production and development.
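In pyOpenSSL terms the change is a single constant; a sketch of the relevant line, assuming the connection context is built the way django-ios-notifications builds it:
from OpenSSL import SSL

context = SSL.Context(SSL.TLSv1_METHOD)  # was SSL.Context(SSL.SSLv3_METHOD)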
UPDATE
Apple will remove SSL 3.0 support in production on October 29th, 2014 due to the POODLE flaw.
https://developer.apple.com/news/?id=10222014a
I have worked on APNs using Python/Django. For this you need three things: the URL, the port, and the certificate provided by Apple for authentication.
views.py
import socket, ssl, json, struct

theCertfile = '/tmp/abc.cert'  # absolute path to the certificate file
ios_url = 'gateway.push.apple.com'
ios_port = 2195
deviceToken = '3234t54tgwg34g'  # iOS device token to send the notification to

def ios_push(msg, theCertfile, ios_url, ios_port, deviceToken):
    thePayLoad = {
        'aps': {
            'alert': msg,
            'sound': 'default',
            'badge': 0,
        },
    }
    theHost = (ios_url, ios_port)
    data = json.dumps(thePayLoad)

    # Pack the binary APNs frame: command, token length, token, payload length, payload
    deviceToken = deviceToken.replace(' ', '')
    byteToken = deviceToken.decode('hex')  # Python 2
    theFormat = '!BH32sH%ds' % len(data)
    theNotification = struct.pack(theFormat, 0, 32, byteToken, len(data), data)

    # Create our connection using the certfile saved locally
    ssl_sock = ssl.wrap_socket(socket.socket(socket.AF_INET, socket.SOCK_STREAM), certfile=theCertfile)
    ssl_sock.connect(theHost)

    # Write out our data
    ssl_sock.write(theNotification)

    # Close the connection -- Apple would prefer that we keep
    # a connection open and push data as needed.
    ssl_sock.close()

# Example call using the values defined above
ios_push('Hello', theCertfile, ios_url, ios_port, deviceToken)
Hopefully this would work for you.