I am trying to run aws-runas with AWS STS to do some development work under a specific user role. I already have the required role access to the AWS account.
The command executed to obtain the STS credentials:
aws-runas.exe <profile_name>
[default]
output = json
region = <example1>
saml_auth_url = <url>
saml_username = <email>
saml_provider = <provider>
federated_username = <username>
credentials_duration = 8h
[profile <name>]
role_arn = arn:aws:iam::<account_id>:role/<role_name>
source_profile = <profile_name>
This is how I formed the config file, which is located at C:\Users\<NAME>\.aws.
The debug output is as follows.
Does anyone have a clue what's wrong here?
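For debugging setups like this, a minimal sketch (assuming aws-runas has already exported the temporary STS credentials into the environment) that confirms which identity the SDK actually picks up:

import boto3

# Print whichever identity boto3 resolves from the current credentials.
# If aws-runas worked, the Arn should show the assumed role rather than the SAML user.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(identity["Account"])
print(identity["Arn"])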
I am using AWS SAM to test my Lambda functions in the AWS cloud.
This is my code for testing Lambda:
# Set "running_locally" flag if you are running the integration test locally
running_locally = True
def test_data_extraction_validate():
if running_locally:
lambda_client = boto3.client(
"lambda",
region_name="eu-west-1",
endpoint_url="http://127.0.0.1:3001",
use_ssl=False,
verify=False,
config=botocore.client.Config(
signature_version=botocore.UNSIGNED,
read_timeout=10,
retries={'max_attempts': 1}
)
)
else:
lambda_client = boto3.client('lambda',region_name="eu-west-1")
####################################################
# Test 1. Correct payload
####################################################
with open("payloads/myfunction/ok.json","r") as f:
payload = f.read()
# Correct payload
response = lambda_client.invoke(
FunctionName="myfunction",
Payload=payload
)
result = json.loads(response['Payload'].read())
assert result['status'] == True
assert result['error'] == ""
This is the command I am using to start AWS SAM locally:
sam local start-lambda -t template.yaml --debug --region eu-west-1
Whenever I run the code, I get the following error:
botocore.exceptions.ClientError: An error occurred (ResourceNotFound) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-2:012345678901:function:myfunction
I don't understand why it's trying to invoke a function located in us-west-2 when I explicitly told the code to use the eu-west-1 region. I also tried using an AWS profile with a hardcoded region - same error.
When I switch running_locally to False and run the code against the real Lambda service (without AWS SAM), everything works fine.
===== Updated =====
The list of env variables:
# env | grep 'AWS'
AWS_PROFILE=production
My AWS configuration file:
# cat /Users/alexey/.aws/config
[profile production]
region = eu-west-1
My AWS Credentials file
# cat /Users/alexey/.aws/credentials
[production]
aws_access_key_id = <my_access_key>
aws_secret_access_key = <my_secret_key>
region=eu-west-1
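For reference, a minimal sketch that pins both the profile and the region explicitly through a boto3 Session (using the production profile from the config above and the same local endpoint passed to sam local start-lambda):

import boto3

# Build the client from an explicit Session so neither AWS_PROFILE nor a
# default region from the environment can silently pick a different region.
session = boto3.session.Session(profile_name="production", region_name="eu-west-1")
lambda_client = session.client(
    "lambda",
    endpoint_url="http://127.0.0.1:3001",  # the sam local start-lambda endpoint
    use_ssl=False,
    verify=False,
)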
Make sure you are actually running the correct local endpoint! In my case, the problem was that I had previously started the Lambda client with an incorrect configuration, so my invocation was not hitting what I thought it was. Try killing the process on the port you have specified (kill $(lsof -ti:3001)), run again, and see if that helps!
This also assumes that you have built the function FunctionName="myfunction" correctly (make sure the function name is spelled correctly in the template file you use during sam build).
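As a quick sanity check (a sketch, not part of the original answer), you can confirm that something is actually listening on the SAM port before running the test:

import socket

# Fail fast if nothing is listening on the SAM local port before running the test.
with socket.create_connection(("127.0.0.1", 3001), timeout=2):
    print("sam local start-lambda is listening on port 3001")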
I'm using OJS 2.4.8.1 and I want to use the AWS SES SMTP service, so I've configured the config file config.inc.php as follows:
; Use SMTP for sending mail instead of mail()
smtp = On
; SMTP server settings
smtp_server = email-smtp.us-east-1.amazonaws.com
smtp_port = 587
; Force the default envelope sender (if present)
; force_default_envelope_sender = On
; Enable SMTP authentication
; Supported mechanisms: PLAIN, LOGIN, CRAM-MD5, and DIGEST-MD5
smtp_auth = PLAIN
smtp_username = XXXXXXXXXX
smtp_password = XXXXXXXXXX/XXXXXXXXX
; Allow envelope sender to be specified
; allow_envelope_sender = On
After sending a test email, I see the following in error_log:
OJS SMTPMailer: Could not authenticate
NOTES: the username and password are correct. I tried adding the params with and without quotes, and I tried ports 25, 465, and 587; nothing works.
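One way to rule OJS out (a sketch using the same SES SMTP endpoint, with placeholder credentials) is to try the login directly with Python's smtplib; SES requires STARTTLS before authenticating on port 587:

import smtplib

# Connect to the SES SMTP endpoint, upgrade to TLS, then authenticate.
# An SMTPAuthenticationError here means the credentials themselves are the problem.
with smtplib.SMTP("email-smtp.us-east-1.amazonaws.com", 587, timeout=10) as smtp:
    smtp.ehlo()
    smtp.starttls()
    smtp.login("SMTP_USERNAME", "SMTP_PASSWORD")  # SES SMTP credentials, not IAM access keys
    print("Authenticated successfully")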
Any help is well received.
I am trying to have Airflow email me using AWS SES whenever a task in my DAG fails or retries. I am using my AWS SES SMTP credentials rather than my general AWS credentials.
My current airflow.cfg
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 25
smtp_mail_from = myemail@myjob.com
Current task in my DAG that is designed to intentionally fail and retry:
testfaildag_library_install_jar_jdbc = PythonOperator(
    task_id='library_install_jar',
    retries=3,
    retry_delay=timedelta(seconds=15),
    python_callable=add_library_to_cluster,
    params={'_task_id': 'cluster_create', '_cluster_name': CLUSTER_NAME, '_library_path': 's3000://fakepath.jar'},
    dag=dag,
    email_on_failure=True,
    email_on_retry=True,
    email='myname@myjob.com',
    provide_context=True
)
Everything works as designed as the task retries the set number of times and ultimately fails, except no emails are being sent. I have checked the logs in the task mentioned above too, and smtp is never mentioned.
I've looked at the similar question here, but the only solution there did not work for me. Additionally, Airflow's documentation such as their example here does not seem to work for me either.
Does SES work with Airflow's email_on_failure and email_on_retry functions?
What I am currently thinking of doing is using the on_failure_callback function to call a python script provided by AWS here to send an email on failure, but that is not the preferable route at this point.
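For completeness, a rough sketch of that fallback (a hypothetical callback using boto3's SES API rather than the linked AWS script; region and addresses are placeholders):

import boto3

def notify_failure(context):
    # Called by Airflow with the task context when a task fails.
    ses = boto3.client("ses", region_name="us-east-1")
    task_id = context["task_instance"].task_id
    ses.send_email(
        Source="verified-sender@example.com",  # must be a verified SES identity
        Destination={"ToAddresses": ["me@example.com"]},
        Message={
            "Subject": {"Data": "Airflow task {} failed".format(task_id)},
            "Body": {"Text": {"Data": str(context.get("exception"))}},
        },
    )

# e.g. PythonOperator(..., on_failure_callback=notify_failure)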
Thank you, appreciate any help.
--updated 6/8 with working SES
Here's my write-up on how we got it all working. There is a short summary at the bottom of this answer.
Couple of big points:
We originally decided not to use Amazon SES and to use sendmail instead; we now have SES up and working.
It is the Airflow worker that services the email_on_failure and email_on_retry features. You can run journalctl -u airflow-worker -f to monitor it during a DAG run. On your production server, you do NOT need to restart your airflow-worker after changing your airflow.cfg with new smtp settings - it should be picked up automatically. No need to worry about disturbing currently running DAGs.
Here is the technical write-up on how to use sendmail:
Since we changed from SES to sendmail on localhost, we had to change the smtp settings in airflow.cfg.
The new config is:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
#smtp_user = not used
#smtp_password = not used
smtp_port = 25
smtp_mail_from = myjob@mywork.com
This works in both production and local airflow instances.
Some common errors one might receive if their config is not like mine above:
socket.error: [Errno 111] Connection refused -- you must change your smtp_host line in airflow.cfg to localhost
smtplib.SMTPException: STARTTLS extension not supported by server. -- you must change your smtp_starttls in airflow.cfg to False
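To reproduce either error outside of Airflow, here is a quick smtplib check against the local MTA (a sketch; assumes sendmail/postfix is listening on localhost:25 as in the config above):

import smtplib

# Mirror the smtp_host = localhost / smtp_starttls = False settings above:
# a plain, unauthenticated connection to the local MTA on port 25.
# "Connection refused" here reproduces the socket.error mentioned above.
with smtplib.SMTP("localhost", 25, timeout=5) as smtp:
    print(smtp.noop())  # roughly (250, b'2.0.0 Ok') if the MTA is up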
In my local testing, I tried to force Airflow to log what was going on when it tried to send an email, so I created a fake DAG as follows:
# Airflow imports
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.dummy_operator import DummyOperator

# General imports
from datetime import datetime, timedelta

def throwerror():
    raise ValueError("Failure")

SPARK_V_2_2_1 = '3.5.x-scala2.11'

args = {
    'owner': 'me',
    'email': ['me@myjob'],
    'depends_on_past': False,
    'start_date': datetime(2018, 5, 24),
    'end_date': datetime(2018, 6, 28)
}

dag = DAG(
    dag_id='testemaildag',
    default_args=args,
    catchup=False,
    schedule_interval="* 18 * * *"
)

t1 = DummyOperator(
    task_id='extract_data',
    dag=dag
)

t2 = PythonOperator(
    task_id='fail_task',
    dag=dag,
    python_callable=throwerror
)

t2.set_upstream(t1)
If you run journalctl -u airflow-worker -f, you can see that the worker says it has sent an alert email on the failure to the email in your DAG, but we were still not receiving the email. We then decided to look at the sendmail logs with cat /var/log/maillog. We saw a log like this:
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: connect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: ID: client=localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/cleanup[port]: ID: message-id=<randomMessageID#production-server-ip-range-ec2-instance>
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: disconnect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/qmgr[port]: MESSAGEID: from=<myjob#mycompany.com>, size=1297, nrcpt=1 (queue active)
Jun 5 14:10:55 production-server-ip-range postfix/smtp[port]: connect to aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
Jun 5 14:11:25 production-server-ip-range postfix/smtp[port]: connect to alt1.aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
So this was probably the biggest "oh, duh" moment. Here we were able to see what was actually going on in our smtp service. We used telnet to confirm that we were not able to connect to the targeted Gmail IP ranges.
We determined that the email was being sent, but that the sendmail service was unable to connect to those IP ranges.
We decided to allow all outbound traffic on port 25 in AWS (as our Airflow production environment is an EC2 instance), and it now works successfully. We are now able to receive emails on failures and retries (tip: email_on_failure and email_on_retry default to True in the DAG API reference - you do not need to put them in your args if you do not want to, but it is still good practice to state True or False explicitly).
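Since the tip above recommends stating the flags explicitly, a minimal sketch of default_args that does so (values are placeholders):

from datetime import datetime

# Placeholder values; the two email_* flags default to True but are stated explicitly here.
default_args = {
    'owner': 'me',
    'email': ['me@myjob.com'],
    'email_on_failure': True,
    'email_on_retry': True,
    'start_date': datetime(2018, 5, 24),
}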
SES now works. Here is the airflow config:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 587
smtp_mail_from = myemail@myjob.com (Verified SES email)
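To confirm the settings above are picked up without waiting for a task to fail, a quick sketch using Airflow's own helper (the same function named in email_backend; the recipient is a placeholder):

from airflow.utils.email import send_email

# Sends a one-off test message through the [smtp] settings in airflow.cfg.
send_email(
    to="me@myjob.com",
    subject="Airflow SES smtp test",
    html_content="If you can read this, the [smtp] section works.",
)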
Thanks!
Similar case here: I tried to follow the same debugging process but got no log output. Also, the outbound rules for my Airflow EC2 instance are open to all ports and IPs, so it must be some other cause.
I noticed that when you create SMTP credentials from SES, it also creates an IAM user. I am not sure how Airflow is running in your case (bare metal on an EC2 instance or wrapped in containers), or how that user's access is set up.
Amazon has instructions for postfix and sendmail, but not OpenSMTPD, so adding them here.
Tested with OpenBSD 5.8
Verify your domain and a sender in the AWS SES console. Save your SMTP settings.
Set up the SMTP authentication details in the mail secrets database (replacing $smtpUsername:$smtpPassword with the values from step 1):
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "ses $smtpUsername:$smtpPassword" >> /etc/mail/secrets
# makemap /etc/mail/secrets
Configure OpenSMTPD:
# nano /etc/mail/smtpd.conf
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
accept for local alias <aliases> deliver to mbox
accept from local for any relay via tls+auth://ses#email-smtp.us-east-1.amazonaws.com auth <secrets>
Restart OpenSMTPD:
# rcctl restart smtpd
Test it:
# sendmail -v -f verified-sender@verified-domain.com to@example.com
Subject: test subject
test body
^D
Errors?
watch your line-breaks in smtpd.conf
# smtpd -n to check for syntax errors in smtpd.conf
Try port 587 if your machine is blocking port 25 (add :587 to the end of the AWS URL in smtpd.conf).