SMTP on AWS private-subnet EC2, Network unreachable [Errno 101]

I'm trying to send email with smtplib from a Python script on a private-subnet EC2 machine, using the code below. The EC2 machine can reach the internal SMTP server on port 25, verified with telnet.
The code works fine from a public-subnet EC2 instance but throws the error shown at the bottom when run from the private subnet.
import smtplib
from email.MIMEMultipart import MIMEMultipart  # Python 2 path; email.mime.multipart in Python 3
from email.MIMEText import MIMEText

msg = MIMEMultipart()
msg['From'] = 'myid@domain.com'
msg['To'] = 'youid@domain.com'
msg['Subject'] = 'simple email in python'
message = 'here is the email'
msg.attach(MIMEText(message))  # without attach() the body would be empty

mailserver = smtplib.SMTP('smtp.gmail.com', 25)
mailserver.ehlo()
mailserver.starttls()
mailserver.ehlo()
mailserver.login('myid@domain.com', 'password')
mailserver.sendmail('myid@domain.com', 'youid@domain.com', msg.as_string())
mailserver.quit()
Getting this error: socket.error: [Errno 101] Network is unreachable

Do you have a NAT gateway serving the private subnet where your SMTP client is located?
Do you have a network ACL active on that private subnet, and is it blocking anything?
Also check the ACL rules in the public subnet.
Is the security group attached to the server open for the traffic you need?
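Before digging into SMTP itself, it is worth confirming plain TCP reachability from the instance. A minimal sketch of such a check (the internal hostname is a placeholder):

import socket

def can_reach(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # '[Errno 101] Network is unreachable' lands here
        print('%s:%s -> %s' % (host, port, exc))
        return False

can_reach('smtp.gmail.com', 25)             # external route: from a private subnet this needs a NAT gateway
can_reach('smtp.internal.example.com', 25)  # placeholder for the internal relay

If the external check fails while the internal one succeeds, the route table (no NAT gateway) is the likely culprit rather than the code.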

The email server configuration was wrong: the internal server does not require a login.
mailserver = smtplib.SMTP('internal.office.server', 25)  # placeholder hostname
#mailserver.login('myid@domain.com', 'password') -- not required for this server
Thank you.
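For completeness, a minimal sketch of the working version, assuming the internal relay accepts unauthenticated mail on port 25 (the hostname is a placeholder):

import smtplib
from email.mime.text import MIMEText

msg = MIMEText('here is the email')
msg['From'] = 'myid@domain.com'
msg['To'] = 'youid@domain.com'
msg['Subject'] = 'simple email in python'

# No starttls()/login(): the internal relay accepts plain SMTP on port 25.
mailserver = smtplib.SMTP('internal.office.server', 25)
mailserver.sendmail(msg['From'], [msg['To']], msg.as_string())
mailserver.quit()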

Related

Cannot send SMTP email from JBoss java app in Amazon EC2 instance: Could not convert socket to TLS

I'm running a Java app in JBoss 6.4.0 on an Amazon Web Services Red Hat 8 EC2 instance.
When my app tries to send an email via javax.mail I get the error "Could not convert socket to TLS".
I then coded up the AmazonSESSample.java sample program and tried it. I ran it on my EC2 instance outside JBoss and it ran successfully. (The AmazonSESSample program can be found here: https://docs.aws.amazon.com/ses/latest/DeveloperGuide/examples-send-using-smtp.html)
Then I commented out the email code in my Java app and replaced it with the code from AmazonSESSample.java. When I run my app with that code inside JBoss I get the same error: "Could not convert socket to TLS". So the sample works fine outside JBoss and fails when running inside JBoss.
Here is the AmazonSESSample code in my app. Can somebody help me fix the "Could not convert socket to TLS" error?
public class AmazonSESSample {
    private static final Logger logger = LogManager.getFormatterLogger("AmazonSESSample");

    // Replace sender@example.com with your "From" address.
    // This address must be verified.
    static final String FROM = "email1@gmail.com";
    static final String FROMNAME = "Steve";

    // Replace recipient@example.com with a "To" address. If your account
    // is still in the sandbox, this address must be verified.
    static final String TO = "email2@gmail.com";

    // Replace smtp_username with your Amazon SES SMTP user name.
    static final String SMTP_USERNAME = "thisIsNotActualghijikl";

    // Replace smtp_password with your Amazon SES SMTP password.
    static final String SMTP_PASSWORD = "abcdefThisIsNotActual";

    // Amazon SES SMTP host name. This example uses the US West (Oregon) region.
    // See https://docs.aws.amazon.com/ses/latest/DeveloperGuide/regions.html#region-endpoints
    // for more information.
    static final String HOST = "email-smtp.us-east-2.amazonaws.com";

    // The port you will connect to on the Amazon SES SMTP endpoint.
    static final int PORT = 587;

    static final String SUBJECT = "Amazon SES test (SMTP interface accessed using Java)";

    static final String BODY = String.join(
        System.getProperty("line.separator"),
        "<h1>Amazon SES SMTP Email Test</h1>",
        "<p>This email was sent with Amazon SES using the ",
        "<a href='https://github.com/javaee/javamail'>Javamail Package</a>",
        " for <a href='https://www.java.com'>Java</a>."
    );

    public int sendEmail(DisplayEmailMessage emailMessage) throws UnsupportedEncodingException, MessagingException {
        // Create a Properties object to contain connection configuration information.
        Properties props = System.getProperties();
        props.put("mail.transport.protocol", "smtp");
        props.put("mail.smtp.port", PORT);
        props.put("mail.smtp.starttls.enable", "true");
        props.put("mail.smtp.auth", "true");

        // Create a Session object to represent a mail session with the specified properties.
        Session session = Session.getDefaultInstance(props);

        // Create a message with the specified information.
        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress(FROM, FROMNAME));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress(TO));
        msg.setSubject(SUBJECT);
        msg.setContent(BODY, "text/html");

        // Create a transport.
        Transport transport = session.getTransport();

        // Send the message.
        try {
            System.out.println("Sending...");
            // Connect to Amazon SES using the SMTP username and password you specified above.
            transport.connect(HOST, SMTP_USERNAME, SMTP_PASSWORD);
            // Send the email.
            transport.sendMessage(msg, msg.getAllRecipients());
            System.out.println("Email sent!");
        }
        catch (Exception ex) {
            System.out.println("The email was not sent.");
            System.out.println("Error message: " + ex.getMessage());
        }
        finally {
            // Close and terminate the connection.
            transport.close();
        }
        return 0;
    }
}
Here is the javamail debug output:
DEBUG: setDebug: JavaMail version 1.4.5.redhat-2
Sending email to 123@gmail.com
DEBUG: getProvider() returning javax.mail.Provider[TRANSPORT,smtp,com.sun.mail.smtp.SMTPTransport,Sun Microsystems, Inc]
Starting to connect at Sun Dec 26 13:14:23 UTC 2021 to email 123@gmail.com
DEBUG SMTP: useEhlo true, useAuth true
DEBUG SMTP: trying to connect to host "smtp.dreamhost.com", port 587, isSSL false
220 pdx1-sub0-mail-a290.dreamhost.com ESMTP
DEBUG SMTP: connected to host "smtp.dreamhost.com", port: 587
EHLO ip-172-31-29-30.us-east-2.compute.internal
250-pdx1-sub0-mail-a290.dreamhost.com
250-PIPELINING
250-SIZE 40960000
250-ETRN
250-STARTTLS
250-AUTH PLAIN LOGIN
250-AUTH=PLAIN LOGIN
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 CHUNKING
DEBUG SMTP: Found extension "PIPELINING", arg ""
DEBUG SMTP: Found extension "SIZE", arg "40960000"
DEBUG SMTP: Found extension "ETRN", arg ""
DEBUG SMTP: Found extension "STARTTLS", arg ""
DEBUG SMTP: Found extension "AUTH", arg "PLAIN LOGIN"
DEBUG SMTP: Found extension "AUTH=PLAIN", arg "LOGIN"
DEBUG SMTP: Found extension "ENHANCEDSTATUSCODES", arg ""
DEBUG SMTP: Found extension "8BITMIME", arg ""
DEBUG SMTP: Found extension "CHUNKING", arg ""
STARTTLS
220 2.0.0 Ready to start TLS
MessagingException
javax.mail.MessagingException: Could not convert socket to TLS
I fixed this by upgrading my JBoss to 7.4.0. (The debug output above shows JavaMail 1.4.5 being loaded; the newer JBoss bundles a newer mail implementation, which presumably resolves the TLS negotiation.)

Can't connect SFTP(AWS EC2) with QuotaGuard Static IP

I am using the QuotaGuard Static add-on on Heroku to access an SFTP server (AWS EC2) that whitelists IPs.
I have tried to connect with a private key file.
This is my code:
def connect
  puts "started"
  Net::SSH.start(ENV["HOST"], ENV["USER"],
    {
      :key_data => [ ENV["FTP_KEY"] ],
      :keys => [],
      :keys_only => true,
      :verbose => :debug,
      :proxy => proxy
    }
  ) do |ssh|
    ssh.sftp.connect do |sftp|
      sftp.dir.foreach("/") do |entry|
        puts entry.longname
      end
    end
  end
  puts "done"
end

def quotaguard
  URI(ENV["QUOTAGUARDSTATIC_URL"])
end

def proxy
  Net::SSH::Proxy::HTTP.new(quotaguard.host, quotaguard.port, :user => quotaguard.user, :password => quotaguard.password)
end
But it fails to connect, with this error:
WARN: Net::SSH::Proxy::ConnectError: {:version=>"HTTP/1.1", :code=>502, :reason=>"Bad Gateway", :headers=>{}, :body=>nil}
HOST, USER, FTP_KEY, and QUOTAGUARDSTATIC_URL are Heroku env variables.
My thought:
I think that to connect to AWS EC2 through the proxy, some settings may need to be configured on the EC2 side to allow the proxy.
But I'm not sure.
The wrong security group was attached to the AWS EC2 instance.
I updated it and can connect now.
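For anyone hitting the same thing: the fix amounts to an inbound rule on the instance's security group allowing port 22 from the QuotaGuard static IPs. A sketch with boto3 (the group ID and IPs are placeholders):

import boto3

ec2 = boto3.client('ec2')

# Placeholders: your security group ID and the static IPs shown in the QuotaGuard dashboard.
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 22,
        'ToPort': 22,
        'IpRanges': [
            {'CidrIp': '203.0.113.10/32', 'Description': 'QuotaGuard static IP 1'},
            {'CidrIp': '203.0.113.11/32', 'Description': 'QuotaGuard static IP 2'},
        ],
    }],
)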

Access Kafka Cluster Outside GCP

I'm currently trying to access a Kafka cluster (Bitnami) from my local machine; however, even after exposing the required host and ports in server.properties and adding firewall rules to allow port 9092, it just doesn't connect.
I'm running a 2-broker, 1-zookeeper configuration.
Expected Output: Producer.bootstrap_connected() should return True.
Actual Output: False
server.properties
listeners=SASL_PLAINTEXT://:9092
advertised.listeners=SASL_PLAINTEXT://gcp-cluster-name:9092
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
security.inter.broker.protocol=SASL_PLAINTEXT
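(For reference: with a single listener advertised under an internal GCP hostname, external clients are told to reconnect to an address they cannot resolve. The usual pattern is separate internal and external listeners; a hedged sketch, where the listener names, ports, and addresses are assumptions:

listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094
advertised.listeners=INTERNAL://<internal-ip>:9092,EXTERNAL://<vm-public-ip>:9094
listener.security.protocol.map=INTERNAL:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
inter.broker.listener.name=INTERNAL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN

The firewall rule then has to allow the external port, 9094 here.)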
Consumer.py
from kafka import KafkaConsumer
import json
import ssl

sasl_mechanism = 'PLAIN'
security_protocol = 'SASL_PLAINTEXT'

# Create a new context using system defaults, disable all but TLS1.2+
context = ssl.create_default_context()
context.options |= ssl.OP_NO_TLSv1
context.options |= ssl.OP_NO_TLSv1_1

consumer = KafkaConsumer('organic-sense',
                         bootstrap_servers='<server-ip>:9092',
                         value_deserializer=lambda x: json.loads(x.decode('utf-8')),
                         ssl_context=context,
                         sasl_plain_username='user',
                         sasl_plain_password='<password>',
                         sasl_mechanism=sasl_mechanism,
                         security_protocol=security_protocol,
                         )
print(consumer.bootstrap_connected())
for data in consumer:
    print(data)
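Two things worth noting on the client side: with SASL_PLAINTEXT there is no TLS, so the ssl_context above is never used; and bootstrap_connected() returning False usually means the TCP connection or the advertised listener is the problem, not SASL. A quick reachability probe from the local machine (the IP is the same placeholder as above):

import socket

# If this raises, fix firewall rules and listeners first; the SASL settings are not the issue yet.
with socket.create_connection(('<server-ip>', 9092), timeout=5):
    print('TCP to the broker works; check advertised.listeners next.')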

Email on failure using AWS SES in Apache Airflow DAG

I am trying to have Airflow email me via AWS SES whenever a task in my DAG fails or retries. I am using my AWS SES credentials rather than my general AWS credentials.
My current airflow.cfg
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 25
smtp_mail_from = myemail@myjob.com
Current task in my DAG that is designed to intentionally fail and retry:
testfaildag_library_install_jar_jdbc = PythonOperator(
    task_id='library_install_jar',
    retries=3,
    retry_delay=timedelta(seconds=15),
    python_callable=add_library_to_cluster,
    params={'_task_id': 'cluster_create', '_cluster_name': CLUSTER_NAME, '_library_path': 's3://fakepath.jar'},
    dag=dag,
    email_on_failure=True,
    email_on_retry=True,
    email='myname@myjob.com',
    provide_context=True
)
Everything works as designed: the task retries the set number of times and ultimately fails, except that no emails are sent. I have checked the logs of the task mentioned above too, and SMTP is never mentioned.
I've looked at a similar question here, but the only solution there did not work for me. Additionally, Airflow's documentation, such as their example here, does not seem to work for me either.
Does SES work with Airflow's email_on_failure and email_on_retry functions?
What I am currently thinking of doing is using the on_failure_callback function to call a python script provided by AWS here to send an email on failure, but that is not the preferable route at this point.
Thank you, appreciate any help.
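For reference, the on_failure_callback route mentioned above can be sketched like this, assuming boto3 and an SES-verified sender (the addresses and region are placeholders):

import boto3

def notify_failure(context):
    # Airflow passes a context dict to failure callbacks; 'task_instance'
    # and 'exception' are standard keys in it.
    ses = boto3.client('ses', region_name='us-east-1')
    subject = 'Airflow task failed: %s' % context['task_instance'].task_id
    ses.send_email(
        Source='myname@myjob.com',  # must be verified in SES
        Destination={'ToAddresses': ['myname@myjob.com']},
        Message={
            'Subject': {'Data': subject},
            'Body': {'Text': {'Data': str(context.get('exception'))}},
        },
    )

# Then attach it per task: PythonOperator(..., on_failure_callback=notify_failure)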
--updated 6/8 with working SES
Here's my write-up on how we got it all working. There is a small summary at the bottom of this answer.
A couple of big points:
We originally decided not to use Amazon SES and to use sendmail instead; we now have SES up and working as well (config at the end of this answer).
It is the airflow worker that services the email_on_failure and email_on_retry features. You can run journalctl -u airflow-worker -f to monitor it during a DAG run. On your production server, you do NOT need to restart your airflow-worker after changing airflow.cfg with new smtp settings; they should be picked up automatically. No need to worry about disturbing currently running DAGs.
Here is the technical write-up on how to use sendmail:
Since we changed from SES to sendmail on localhost, we had to change our smtp settings in airflow.cfg.
The new config is:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
#smtp_user = not used
#smtp_password = not used
smtp_port = 25
smtp_mail_from = myjob@mywork.com
This works in both production and local airflow instances.
Some common errors one might receive if their config is not like mine above:
socket.error: [Errno 111] Connection refused -- you must change your smtp_host line in airflow.cfg to localhost
smtplib.SMTPException: STARTTLS extension not supported by server. -- you must change your smtp_starttls in airflow.cfg to False
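A quick way to sanity-check the [smtp] block outside Airflow is to replicate it in a few lines of Python; if this fails, the problem is the mail setup, not Airflow (addresses are placeholders):

import smtplib

# Mirrors the airflow.cfg above: localhost relay, no STARTTLS, no auth.
server = smtplib.SMTP('localhost', 25)
server.sendmail('myjob@mywork.com', ['you@mywork.com'],
                'Subject: airflow smtp test\n\ntest body')
server.quit()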
In my local testing, I tried to simply force airflow to show a log of what was going on when it tried to send an email. I created a fake DAG as follows:
# Airflow imports
from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.operators.bash_operator import BashOperator
from airflow.operators.dummy_operator import DummyOperator
# General imports
from datetime import datetime, timedelta

def throwerror():
    raise ValueError("Failure")

SPARK_V_2_2_1 = '3.5.x-scala2.11'

args = {
    'owner': 'me',
    'email': ['me@myjob'],
    'depends_on_past': False,
    'start_date': datetime(2018, 5, 24),
    'end_date': datetime(2018, 6, 28)
}

dag = DAG(
    dag_id='testemaildag',
    default_args=args,
    catchup=False,
    schedule_interval="* 18 * * *"
)

t1 = DummyOperator(
    task_id='extract_data',
    dag=dag
)

t2 = PythonOperator(
    task_id='fail_task',
    dag=dag,
    python_callable=throwerror
)

t2.set_upstream(t1)
If you run journalctl -u airflow-worker -f, you can see that the worker says it has sent an alert email on the failure to the email in your DAG, but we were still not receiving the email. We then decided to look into the sendmail logs with cat /var/log/maillog. We saw a log like this:
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: connect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: ID: client=localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/cleanup[port]: ID: message-id=<randomMessageID#production-server-ip-range-ec2-instance>
Jun 5 14:10:25 production-server-ip-range postfix/smtpd[port]: disconnect from localhost[127.0.0.1]
Jun 5 14:10:25 production-server-ip-range postfix/qmgr[port]: MESSAGEID: from=<myjob#mycompany.com>, size=1297, nrcpt=1 (queue active)
Jun 5 14:10:55 production-server-ip-range postfix/smtp[port]: connect to aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
Jun 5 14:11:25 production-server-ip-range postfix/smtp[port]: connect to alt1.aspmx.l.google.com[smtp-ip-range]:25: Connection timed out
So this was probably the biggest "oh duh" moment. Here we are able to see what is actually going on in our smtp service. We used telnet to confirm that we were not able to connect to Gmail's MX IP ranges on port 25 from the instance.
We determined that the email was attempting to be sent, but that the sendmail service was unable to connect to the ip ranges successfully.
We decided to allow all outbound traffic on port 25 in AWS (as our airflow production environment is an EC2 instance), and it now works successfully. We are now able to receive emails on failures and retries. (Tip: email_on_failure and email_on_retry default to True in the DAG API reference; you do not need to put them in your args, but it is still good practice to state True or False explicitly.)
SES now works. Here is the airflow config:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
smtp_user = REMOVEDAWSACCESSKEY
smtp_password = REMOVEDAWSSECRETACCESSKEY
smtp_port = 587
smtp_mail_from = myemail@myjob.com (verified SES email)
Thanks!
Similar case here. I tried to follow the same debugging process but got no log output, and the outbound rules for my Airflow EC2 instance are open to all ports and IPs, so it must be some other cause.
I noticed that when you create SMTP credentials from SES, it also creates an IAM user. I am not sure how Airflow is running in your case (bare metal on an EC2 instance or wrapped in containers) or how that user's access is set up.
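One related gotcha: the SES SMTP password is not the IAM secret access key itself; it is derived from it. AWS documents the derivation, which looks roughly like this (the key and region below are placeholders):

import base64
import hashlib
import hmac

def ses_smtp_password(secret_access_key, region):
    # Per AWS's documented algorithm: a chain of HMAC-SHA256 signatures,
    # a version byte 0x04 prepended, then base64.
    def sign(key, msg):
        return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()
    signature = sign(('AWS4' + secret_access_key).encode('utf-8'), '11111111')
    for part in (region, 'ses', 'aws4_request', 'SendRawEmail'):
        signature = sign(signature, part)
    return base64.b64encode(bytes([0x04]) + signature).decode('utf-8')

print(ses_smtp_password('REMOVEDAWSSECRETACCESSKEY', 'us-east-1'))

So if smtp_password is set to the raw IAM secret key, SES will reject the login even when everything else is correct.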

Creating Kubernetes TLS assets before I know the public and private IPs

Following https://coreos.com/kubernetes/docs/latest/getting-started.html, I wanted to generate the TLS assets for my Kubernetes cluster.
My plan to push those keys via cloud-config to the AWS API to create EC2 instances won't work, because I won't know the public and private IPs of those instances in advance.
I thought about moving the CA cert to the instances via cloud-config and then generating those assets from a script run by a systemd unit file. My biggest concern here is that I don't want to put a CA root cert into a cloud-config.
Does anyone have a solution to this situation?
According to how kube-aws does it, I can set my api-server conf like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = kubernetes.mydomain.de
IP.1 = 10.3.0.1
to the "minimal config file" i added
My public DNS DNS.5 = kubernetes.mydomain.de
I omit the MASTER_HOST IP address because I can instead use the FQDN (kubernetes.mydomain.de) to get to that IP
The "K8S_SERVICE_IP", which should be the first IP of my internal IP range (10.3.0.0/24): IP.2 = 10.3.0.1
The worker conf looks like this:
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = *.*.cluster.internal
The trick here is to set the SAN to the wildcard *.*.cluster.internal. This way all the workers verify with that cert on the internal network, and I don't have to set a specific IP address for each worker.
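With conf files like those above, issuing the certs follows the usual openssl flow; a sketch, where the file names are assumptions mirroring the CoreOS guide:

openssl genrsa -out worker-key.pem 2048
openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=kube-worker" -config worker-openssl.cnf
openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
  -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf

The -extensions v3_req/-extfile flags are what carry the subjectAltName entries from the conf file into the signed cert.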