I'm trying to get Discourse set up on an AWS EC2 instance, but am having problems getting the emails to send via AWS SES.
Firstly I have an email and a domain set up and confirmed (not in sandbox mode) on AWS SES and I can successfully send test emails from the AWS SES dashboard, and also manually through postfix running on the Discourse machine instance.
I have attempted to follow the instructions here: http://stroupaloop.com/blog/discourse-setup-using-aws/ (though I realise this is quite old now, so perhaps the config has changed since) and also found "Discourse SES AWS working app.yml file example please" - but that config isn't working for me either.
For info, I'm editing the app.yml file by doing:
$ sudo ./launcher stop app
$ sudo nano ./containers/app.yml
[making the edits and saving]
$ sudo ./launcher bootstrap app
[it tells me it has bootstrapped correctly]
$ sudo ./launcher start app
[I can now view the Discourse site, but can't log in to any accounts as the confirmation emails aren't getting sent]
Currently I have this in my app.yml file (sensitive info replaced):
DISCOURSE_SMTP_ADDRESS: email-smtp.eu-west-1.amazonaws.com
DISCOURSE_SMTP_PORT: 587
DISCOURSE_SMTP_USER_NAME: XXXXXXXXXXXXXXXX
DISCOURSE_SMTP_PASSWORD: XXXXXXXXXXXXXXXXXXXX
DISCOURSE_SMTP_ENABLE_START_TLS: true
DISCOURSE_SMTP_AUTHENTICATION: "login"
DISCOURSE_SMTP_OPENSSL_VERIFY_MODE: none
DISCOURSE_SMTP_DOMAIN: mydomain.net
DISCOURSE_SMTP_FROM_ADDRESS: me@mydomain.net
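To rule Discourse itself out, it can help to test the same SES credentials with a small standalone script. A minimal sketch, assuming the eu-west-1 endpoint from the config above; the addresses and credentials are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(from_addr, to_addr):
    # Build a trivial test message; From must be an SES-verified identity.
    msg = EmailMessage()
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg["Subject"] = "SES SMTP test"
    msg.set_content("If you can read this, the SES SMTP credentials work.")
    return msg

def send_via_ses(msg, user, password,
                 host="email-smtp.eu-west-1.amazonaws.com", port=587):
    # Port 587 on SES requires a STARTTLS upgrade before AUTH.
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        smtp.send_message(msg)
```

If `smtp.login` raises an authentication error here, the problem is with the SES SMTP credentials themselves (note that these are generated separately in the SES console and are not your regular AWS access keys), not with Discourse.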
Also, in the SES sending stats dashboard, I'm not even seeing that it's trying to send the email.
So even a good starting point would be to know if there is an email log file somewhere in the Discourse docker container that I can look at to see what the issue may be.
Any help on where I'm going wrong here would be much appreciated.
I had a similar issue and I fixed it by editing app.yml and adding this line towards the end (the line is commented out by default):
- exec: rails r "SiteSetting.notification_email='info@unconfigured.discourse.org'"
You must replace info@unconfigured.discourse.org with the validated email address associated with your SES credentials. You can check your validated email identities under AWS -> SES -> Identity Management -> Email Addresses; the Verification Status must be "verified". If you managed to send and receive a test email from there, you probably have this set up already.
Once you have applied these changes, re-run the setup script to pick up the changes:
sudo ./discourse-setup
Hope this works at your end!
I had my Discourse deployed on EC2 using Bitnami, and after trying for hours I was able to configure SES (sandbox) with Discourse. Here's what I did:
1. Created SMTP credentials in the AWS console.
2. Verified two email addresses in the AWS console; since the email service was in sandbox mode, both the sender and receiver addresses have to be verified.
3. Added the SMTP settings to /apps/discourse/htdocs/config/discourse.conf, which looked like this:
db_name = bitnami_discourse
db_host = /opt/bitnami/postgresql
db_port = 5432
db_pool = 25
hostname = 3.89.1xx.xx
db_username = bn_discourse
db_password = "xxxxxxxxxx"
redis_port = 6379
redis_path = /opt/bitnami/redis/var/run/redis.sock
smtp_address = "email-smtp.us-east-1.amazonaws.com"
smtp_port = 587
smtp_security = ssl
smtp_domain = 3.89.1xx.xx
smtp_user_name = "xxxxxxxxxxxxxxxxx"
smtp_password = "xxxxxxxxxxxxxxxxxxxxxxxxxxxx"
from_address = youremailaddress@example.com
The SMTP username and password are the same SMTP credentials you obtained in step 1. After you have configured this file, make sure to restart the server by running this command (outside of /apps):
sudo /opt/bitnami/ctlscript.sh restart
Here's a reference.
I checked quite a few Stack Overflow questions about this, but none seemed to match my exact case or work for me, so I'm posting this question. I'm trying to set up Weblate using Docker, which wants me to set the Weblate email host user, password, etc. so it can send mails to users of the site. My current docker-compose.override.yml looks like this:
version: '3'
services:
  weblate:
    ports:
      - 1111:8080
    environment:
      WEBLATE_EMAIL_HOST: smtp.mymailserver.com
      WEBLATE_EMAIL_PORT: 465
      WEBLATE_EMAIL_HOST_USER: translate@domain.com
      WEBLATE_EMAIL_HOST_PASSWORD: password
      WEBLATE_SERVER_EMAIL: translate@domain.com
      WEBLATE_DEFAULT_FROM_EMAIL: translate@domain.com
      WEBLATE_SITE_DOMAIN: translate.mydomain.com
      WEBLATE_ADMIN_PASSWORD: mypass
      WEBLATE_ADMIN_EMAIL: myemail@domain.com
I tested this with the Gmail app on my phone using the same outgoing server configuration and it worked perfectly fine there (I was able to send mails from it), but whenever I try it with Weblate, I see this error:
SMTPAuthenticationError: (535, b'Authentication credentials invalid')
This is the whole error I get in the logs
You don't have SSL enabled; that might be the reason the server is rejecting the credentials. Try enabling WEBLATE_EMAIL_USE_SSL.
PS: In the upcoming release, this will be turned on automatically for port 465, see https://github.com/WeblateOrg/weblate/commit/efacbf5d7e36c7207e985744639564e7edfc2fbb
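For background: port 465 expects an implicit-TLS connection from the very first byte, while port 587 (and usually 25) starts in plaintext and upgrades via STARTTLS. A client speaking the wrong protocol for the port often surfaces as an authentication or connection error. A sketch of the distinction using Python's smtplib (host and credentials are placeholders):

```python
import smtplib

def connect_smtp(host, port, user, password):
    # Port 465: implicit TLS, encrypted from the first byte.
    if port == 465:
        smtp = smtplib.SMTP_SSL(host, port)
    # Other submission ports: plaintext connection upgraded via STARTTLS.
    else:
        smtp = smtplib.SMTP(host, port)
        smtp.starttls()
    smtp.login(user, password)
    return smtp
```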
I am working on a Django REST Framework web application, for that I have a Django server running in an AWS EC2 Linux box at a particular IP:PORT. There are URLs (APIs) which I can call for specific functionalities.
In Windows machine as well as in other local Linux machine (not AWS EC2) I am able to call those APIs successfully and getting the desired results perfectly.
But the problem arises when I try to call the APIs from within the same EC2 Linux box.
A simple code I wrote to test the call of one API from the same AWS EC2 Linux box:
import requests

vURL = 'http://<ipaddress>:<port>/myapi/'
vSession = requests.Session()
vSession.headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
vResponse = vSession.get(vURL)
if vResponse.status_code == 200:
    print('JSON: ', vResponse.json())
else:
    print('GET Failed: ', vResponse)
vSession.close()
This script is returning GET Failed: <Response [403]>.
One thing I am sure of is that there is no authentication-related issue on the EC2 instance, because this same script gets an actual response on other local Linux machines (not AWS EC2) and also on a Windows machine.
It seems that the calling of the API (which includes the same IP:PORT of the same AWS EC2 machine) from the same machine is somehow getting restricted by either the security policies of AWS or firewall or something else.
Maybe I have to make some changes in settings.py, though I have incorporated all the required settings there as far as I know, like:
ALLOWED_HOST
CORS_ORIGIN_WHITELIST
Mentioning corsheaders in INSTALLED_APPS list
Mentioning corsheaders.middleware.CorsMiddleware in MIDDLEWARE list
For example, below are the CORS settings that I have incorporated in settings.py:
CORS_ORIGIN_ALLOW_ALL = True
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOW_METHODS = ('GET', 'PUT', 'POST', 'DELETE')
CORS_ORIGIN_WHITELIST = (
    <all the IP addresses that call this
     application are listed here;
     the list includes the IP of the
     AWS EC2 machine as well>
)
Does anyone have any ideas regarding this issue? Please help me to understand the reason of this issue and how to fix this.
Thanks in advance.
As per the discussion in the comments, it's clear that the inbound and outbound rules are fine.
So, check the proxy settings, like the no_proxy environment variable, on the AWS Linux box itself.
Command to check environment variables: set
Since inbound and outbound traffic is allowed, try setting the no_proxy value, appending your IP address to it.
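If a proxy variable does turn out to be the culprit, the script can also be told to ignore the environment entirely, since requests picks up http_proxy/https_proxy/no_proxy by default. A small sketch (the URL placeholder is kept from the question):

```python
import requests

# requests honours http_proxy / https_proxy / no_proxy from the environment.
# Setting trust_env = False makes the session ignore all of them, so the
# request goes straight to the target host.
session = requests.Session()
session.trust_env = False
# response = session.get('http://<ipaddress>:<port>/myapi/')  # placeholder URL
```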
Please let me know if this helped.
Thanks.
I deployed an app successfully following this link.
After deployment, I am having trouble connecting to Cloud SQL. In my IPython notebook, before I deploy my app, I can use the following statement to connect to my cloud instance using Google SDK:
cloud_sql_proxy.exe -instances="project_name:us-east1:instance_name"=tcp:3306
After entering the above, I get a notification in Google Cloud Shell
"listening on 127.0.0.1:3306 for project_name:us-east1:instance_name
ready for new connections"
I then use my IPython notebook to test the connection:
host = '127.0.0.1' (also changed to my Cloud SQL instance's IP address)
user = 'cloud_sql_user'
password = 'cloud_sql_password'
conn1 = pymysql.connect(host=host,
                        user=user,
                        password=password,
                        db='mydb')
cur1 = conn1.cursor()
Local test results: Can connect to Cloud SQL from IPython and query cloud database. Next step: deploy
gcloud app deploy
Result: App deployed. However, upon navigating to my website and typing names into the input field, it takes me to a new URL and I get the error:
OperationalError at /search/
(2003, "Can't connect to MySQL server on '127.0.0.1' (timed out)")
My main questions are:
How can we get PyMySQL query into a cloud database after deployment?
Do I need Gunicorn if I'm using Windows and need to connect to their cloud database?
Is SQLAlchemy needed for me? I'm not using an ORM. The online instructions aren't really that clear. My local host computer is on Windows 7, Python 3 and Django.
Edit: I edited the file based on the suggestion by the user below. I still get the error 'connection timed out'
Found it. Change the socket in your pymysql call to unix_socket = "your cloud connection string name". Let host be 'localhost', user = 'your cloud username' and password = 'your cloud sql password'.
Edit: don't forget the /cloudsql/ part in the connection string name.
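Putting that together, a hedged sketch of the connection call; the connection string, credentials and database name are placeholders, not values from the question:

```python
def connect_cloudsql():
    # PyMySQL is imported here so the sketch stays self-contained
    # (pip install pymysql).
    import pymysql
    # On App Engine, Cloud SQL is exposed as a unix socket under
    # /cloudsql/<project>:<region>:<instance>; pass it via unix_socket
    # instead of host/port.
    return pymysql.connect(
        unix_socket='/cloudsql/project_name:us-east1:instance_name',
        user='cloud_sql_user',
        password='cloud_sql_password',
        db='mydb')
```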
This post is already a bit old, I hope you already solved this!
You can check if you're in production like this:
if os.getenv('GAE_INSTANCE'):
In the documentation, they manage it this way:
if os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine'):
I think that because this condition is wrong, you are overwriting the DATABASES['default']['HOST'] value with '127.0.0.1'.
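A sketch of the resulting host selection, assuming the GAE_INSTANCE check; the instance name is a placeholder:

```python
import os

# App Engine sets GAE_INSTANCE in the runtime environment, so it works as a
# production check; locally it is unset and we fall back to the Cloud SQL
# Proxy listening on 127.0.0.1.
if os.getenv('GAE_INSTANCE'):
    db_host = '/cloudsql/project_name:us-east1:instance_name'  # unix socket
else:
    db_host = '127.0.0.1'  # local Cloud SQL Proxy
```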
Hope this will be the answer you were looking for!
I have a problem. I am using the AWS SES service for delivering my email, and that works without problems. The problem arises when I want to forward mail from one mailbox to another destination account; I receive the following error:
Mail from user_sernder@domain.tld
(ultimately generated from usermail_forwarder@domain.tld)
host ses-smtp-us-west-xxxxxxx.xxxxx.us-west-2.elb.amazonaws.com
SMTP error from remote mail server after end of data:
554 Message rejected: Email address is not verified. The following identities failed the check in region US-WEST-2:
user_sernder@domain.tld
I investigated the issue, and the reason is that SRS (Sender Rewriting Scheme) is not compatible with AWS SES; reference here.
I asked cPanel for support, because I use cPanel on my server, but got no help at all; they answered that it was a technical problem.
The only solution I can think of is: if mail is forwarded by any local user, or its size is more than 10 MB, send it via another router instead of SES.
Here is my problem as I have defined it: in the router section I use these lines, but they only work for one domain. I have read that you can inspect the headers of the email, but I do not know how:
sender_redirect:
  driver = dnslookup
  domains = domain.tld
  transport = remote_smtp
  no_more
I know it's wrong, but I do not know how to declare the conditions for these requirements:
1. If the mail is being forwarded by any mailbox, send it via this router and not via Amazon SES.
2. If the message is greater than 10 MB, send it via this router and not via Amazon SES.
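For requirement 2 (message size), here is a hedged, untested sketch of an Exim router that could sit before the SES smarthost router: it matches messages over 10 MB and delivers them directly via DNS lookup instead of the smarthost. The router name, the 10M threshold and the +local_domains list are assumptions based on a standard Exim configuration:

```
# Hypothetical router: place it ABOVE the SES smarthost router so it
# matches first. $message_size is compared numerically; 10M = 10 MB.
bypass_ses_large:
  driver = dnslookup
  domains = ! +local_domains
  condition = ${if >{$message_size}{10M}}
  transport = remote_smtp
  no_more
```

Requirement 1 (detecting forwarded mail) is harder, since by the time a router runs, the forward has already rewritten the envelope; one approach would be a condition on a header added by the forwarding step, but which header is available depends on the cPanel/Exim setup.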
I have set up Redmine and configured the email sending feature with SMTP.
It takes around 15 seconds for any user action to complete if email is enabled (sending email takes time at our SMTP server, as it has a configured delay).
So I have tried using async_smtp as shown below.
production:
  delivery_method: :async_smtp
  async_smtp_settings:
    enable_starttls_auto: true
    address: "smtp.xxx.com"
    port: 25
    domain: "smtp.xx.com"
    authentication: :plain
    user_name: "yyy@xxx.com"
    password: "xxx!"
Redmine shows that the email is sent, but I can't see the email. The log also doesn't show any error.
Can someone help?
Adding more detail below:
With the above settings, I get a success log as shown below:
Sent email "Redmine test" (16ms) to: [email]
Redirected to http://[ip]/redmine/settings?tab=notifications Completed 302 Found in
328ms (ActiveRecord: 0.0ms) Started GET
"/redmine/settings?tab=notifications" for [ip] at 2015-10-05 15:13:04+0530
Note: I have replaced the IP and email addresses with [ip] and [email].
I made it work!
I found that you need to add an extra level in the config file, email_delivery. It's hinted at elsewhere in the file, but all the examples on redmine.org miss it out. It seems that only async requires it.
production:
  email_delivery:
    delivery_method: :async_smtp
    async_smtp_settings:
      address: ...
It looks like the same problem I had: your SMTP server can't reroute your request. I had to set my configuration back to smtp instead of async_smtp, as our SMTP server can't handle it.
Is it your own SMTP server, or Hotmail, Gmail, etc.?
BTW, can you try adding config.action_mailer.logger = nil to your config/environments/production.rb configuration file and give us the output of log/production.log?
EDIT: it looks like Redmine has problems handling certain SSL certificates.