I'm using Google Compute Engine, Bitnami, and Mailgun to set up a MediaWiki site (v1.33.1-1 on Debian 9). I'm very new to all of these things.
My Mailgun is properly set up and verified, and I'm following the documentation provided here: https://cloud.google.com/compute/docs/tutorials/sending-mail/using-mailgun
When I run:
echo 'Test passed.' | mail -s 'Test-Email' EMAIL@EXAMPLE.COM
And then:
tail -n 5 /var/log/syslog
These are my results:
root@bitnami-mediawiki-860c:~# tail -n 5 /var/log/syslog
Nov 15 03:58:39 bitnami-mediawiki-860c postfix/qmgr[13119]: 8E84FA13DA: from=<>, size=2918, nrcpt=1 (queue active)
Nov 15 03:58:39 bitnami-mediawiki-860c postfix/bounce[13144]: 7A557A13D9: sender non-delivery notification: 8E84FA13DA
Nov 15 03:58:39 bitnami-mediawiki-860c postfix/qmgr[13119]: 7A557A13D9: removed
Nov 15 03:58:39 bitnami-mediawiki-860c postfix/smtp[13142]: 8E84FA13DA: to=<root@bitnami-mediawiki-860c>, relay=none, delay=0.01, delays=0.01/0/0/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=bitnami-mediawiki-860c type=AAAA: Host not found)
Nov 15 03:58:39 bitnami-mediawiki-860c postfix/qmgr[13119]: 8E84FA13DA: removed
Can anyone tell me how to fix this? Be specific if you can, as I'm beginning from nearly zero prior knowledge.
Google Cloud has always blocked port 25 by default; however, you can use different ports, e.g. 587 and 465. Those ports should work for sending mail from a VM instance, and the blocked port is likely the root cause of this not working as expected. As mentioned in the comments, it should also work with port 2525.
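If Postfix is relaying through Mailgun as in that tutorial, the relevant part of /etc/postfix/main.cf should look roughly like this (a sketch based on the linked docs; YOUR_DOMAIN and YOUR_MAILGUN_SMTP_PASSWORD are placeholders to replace):

relayhost = [smtp.mailgun.org]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = static:postmaster@YOUR_DOMAIN:YOUR_MAILGUN_SMTP_PASSWORD
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

After editing, restart Postfix (sudo systemctl restart postfix) and verify the port is reachable from the VM with nc -vz smtp.mailgun.org 2525.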
I'm trying to figure out why my HTTPS sites go down every time my server's DHCP lease gets renewed.
It happens consistently, but HTTP sites continue to work just fine.
Restarting systemd-networkd brings the sites back, but until that happens the HTTPS sites are basically unreachable.
Any tips on where to look first?
The weird thing is that the sites come back after the next DHCP lease renewal, then I lose connectivity at the one after that; it keeps alternating like this, on and on.
This is what I see in syslog when it happens.
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: DHCP lease lost
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: DHCPv4 address 10.138.0.29/32 via 10.138.0.1
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: IPv6 successfully enabled
Apr 13 18:06:25 www-1 dbus-daemon[579]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.231' (uid=101 pid=13973 comm="/lib/systemd/systemd-networkd " label="unconfined")
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: Configured
Apr 13 18:06:25 www-1 systemd[1]: Starting Hostname Service...
Apr 13 18:06:25 www-1 dbus-daemon[579]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 18:06:25 www-1 systemd[1]: Started Hostname Service.
Apr 13 18:06:25 www-1 systemd-hostnamed[17589]: Changed host name to 'www-1.us-west1-b.c.camp-fire-259800.internal'
This issue seems to be related to the following:
https://moss.sh/name-resolution-issue-systemd-resolved/
and
https://github.com/systemd/systemd/issues/9243
I've disabled systemd-resolved and am using a static /etc/resolv.conf copied from /run/systemd/resolve/resolv.conf.
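Concretely, the workaround amounted to something like this (a sketch; /etc/resolv.conf is usually a symlink managed by systemd-resolved, so it has to be removed before copying):

sudo systemctl disable --now systemd-resolved
sudo rm /etc/resolv.conf
sudo cp /run/systemd/resolve/resolv.conf /etc/resolv.conf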
For internal DNS I'm using a private Google DNS Zone.
Thanks.
I am trying to update certs on my servers with dehydrated and dehydrated-route53-hook-script.
Here is the complete command and error:
./xsys renewcerts
Running: cd certificates && ./dehydrated --cron
# INFO: Using main config file ..config/certificates/config
Processing mydomain.org with alternative names: dev-mydomain.org
+ Checking domain name(s) of existing cert... unchanged.
+ Checking expire date of existing cert...
+ Valid till Apr 21 11:47:17 2019 GMT (Less than 30 days). Renewing!
+ Signing domains...
+ Generating private key...
+ Generating signing request...
+ Requesting new certificate order from CA...
+ Received 2 authorizations URLs from the CA
+ Handling authorization for dev-mydomain.org
+ Handling authorization for mydomain.org
+ 2 pending challenge(s)
+ Deploying challenge tokens...
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Could not find zone for dev-mydomain.org
Running: cd certificates && ./dehydrated --cleanup
It looks like the AWS credentials are failing, but as far as I can tell those are OK. I last ran this ~60 days ago and it worked fine then, and (as far as I know) nothing has changed since.
Any ideas on where to look for a fix are appreciated.
Update
I found that this command is failing:
$ cli53 list
Error: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
So the root issue seems to be cli53. I have credentials in ~/.aws/credentials per docs.
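For reference, cli53 uses the standard AWS SDK credential chain, so ~/.aws/credentials should look something like this (keys redacted; the default profile is assumed):

[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

If the AWS CLI is installed, aws sts get-caller-identity is a quick way to test the same credential chain outside cli53.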
This ended up being an issue with cli53. I had a symlink as follows...
ls -la .aws/
total 0
drwxr-xr-x 3 myuser staff 96 Apr 5 15:33 .
drwxr-xr-x+ 143 myuser staff 4576 Apr 8 12:30 ..
lrwxr-xr-x 1 myuser staff 69 Apr 5 15:33 credentials -> /Users/myuser/ansible/myapp/_secrets/aws_credentials
...but I had recently moved that file to /Users/myuser/apps/myapp/_secrets/aws_credentials, so the symlink was dangling and cli53 simply couldn't find the credentials.
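The fix was simply to repoint the symlink at the new location:

ln -sfn /Users/myuser/apps/myapp/_secrets/aws_credentials ~/.aws/credentials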
I managed to follow all the steps listed here to set up the AWS scripts that pick up the system's memory usage and report it to CloudWatch. The problem I'm having is that the metric is not getting picked up in the CloudWatch console.
When I do
$ ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --verbose
The metric gets successfully sent to CloudWatch, and I can see it in the console.
But when I try to do the same through a cron job, the metric doesn't get picked up in the CloudWatch console.
To set up the cron job, I did
$ sudo crontab -e
and added this line
*/5 * * * * ~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron
saved and exited. When I check /var/log/syslog, it says that the metric was successfully sent, but for some reason I don't see it in the CloudWatch console. What am I missing here?
The syslog is below for reference (with the IP masked):
Jan 18 22:55:01 ip-xxx-xx-xx-xx CRON[22536]: (root) CMD (~/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron)
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/pickup[22530]: 7FF494449A: uid=0 from=<root>
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/cleanup[22540]: 7FF494449A: message-id=<20170118225501.7FF494449A@ip-xxx-xx-xx-xx.localdomain>
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/qmgr[21671]: 7FF494449A: from=<root@ip-xxx-xx-xx-xx.localdomain>, size=673, nrcpt=1 (queue active)
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/local[22542]: warning: dict_nis_init: NIS domain name not set - NIS lookups disabled
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/local[22542]: 7FF494449A: to=<root@ip-xxx-xx-xx-xx.localdomain>, orig_to=<root>, relay=local, delay=0.03, delays=0.02/0/0/0, dsn=2.0.0, status=sent (delivered to mailbox)
Jan 18 22:55:01 ip-xxx-xx-xx-xx postfix/qmgr[21671]: 7FF494449A: removed
Note: using the absolute path in the cron job did the trick. I documented the various hiccups here.
Cron doesn't use the login shell's environment variables, so ~ might not resolve to your user's HOME directory as it does in your manual tests. Try replacing it with the absolute path (e.g., /home/sarul/aws-scripts-mon/mon-put-instance-data.pl) and see if the script runs correctly.
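For example, the crontab entry would become something like the following (the home directory here is just an example; use the actual path on your instance):

*/5 * * * * /home/sarul/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron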
If you're using local AWS credentials in the user's environment or ~/.aws/config rather than an instance profile, you might need to add these credentials somewhere accessible by cron as well.
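The monitoring scripts can also be pointed at a credential file explicitly; for example (the path is an assumption, and the file uses the AWSAccessKeyId=/AWSSecretKey= format from the bundled awscreds.template):

*/5 * * * * /home/sarul/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron --aws-credential-file=/home/sarul/aws-scripts-mon/awscreds.conf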
Also note that the postfix syslog entries indicate that a mail message of some sort is being queued - perhaps related to an error reported by the script invoked by cron.
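Since your log shows status=sent (delivered to mailbox), that message landed in root's local mailbox, and reading it may reveal the script's error output, e.g. sudo less /var/mail/root (the exact mailbox path depends on your MTA setup).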
I am getting these errors from MailEnable; the OS is CentOS. The errors are from /var/log/maillog, as suggested by @OlegNeumyvakin.
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: handlers_stderr:$
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: SKIP during call$
Sep 8 03:33:12 localhost postfix/pickup[35664]: 66B7B21F2D4F: uid=0 from=$
Sep 8 03:33:12 localhost postfix/cleanup[38422]: 66B7B21F2D4F: message-id$
Sep 8 03:33:12 localhost postfix/qmgr[9634]: 66B7B21F2D4F: from=<root@loc$
The email cannot send nor receive anything. I am trying to get it to work since it is for a site and it needs to send/receive emails.
You can check your virtual address with the command:
postmap -q mail@example.tld hash:/var/spool/postfix/plesk/virtual
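If the address is present in the map, postmap prints the mapped value and exits 0; if it is missing, it prints nothing and exits 1, so a quick check is:

postmap -q mail@example.tld hash:/var/spool/postfix/plesk/virtual && echo found || echo missing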
virtual.db is a Berkeley DB file; you can check its content with the Berkeley DB dump utility:
# db5.1_dump -p /var/spool/postfix/plesk/virtual.db
VERSION=3
format=print
type=hash
h_nelem=4103
db_pagesize=4096
HEADER=END
drweb@example.tld\00
drweb@localhost.localdomain\00
kluser@example.tld\00
kluser@localhost.localdomain\00
mail1@example.tld\00
mail1@example.tld\00
postmaster@example.tld\00
postmaster@localhost.localdomain\00
root@dexample.tld\00
root@localhost.localdomain\00
anonymous@example.tld\00
anonymous@localhost.localdomain\00
mailer-daemon@example.tld\00
mailer-daemon@localhost.localdomain\00
DATA=END
You can install this utility with yum install libdb-utils.
Also, in case you have issues with sending mail, you can check the limitations on outgoing email messages at Tools & Settings > Mail Server Settings and, if you have enabled it, Tools & Settings > Outgoing Mail Control.
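You can also confirm which virtual maps Postfix is actually configured to use by querying its configuration, for example:

postconf virtual_alias_maps virtual_mailbox_maps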
I'm a student from Korea. First, I'm sorry about my poor English :)
I'm building a web service using AWS + nginx + Django.
I connect to the AWS instance (Ubuntu) using the SSH protocol.
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.13.0-74-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Sat Apr 30 07:03:51 UTC 2016
System load: 0.0 Processes: 105
Usage of /: 23.8% of 7.74GB Users logged in: 0
Memory usage: 14% IP address for eth0: 172.31.17.137
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
21 packages can be updated.
17 updates are security updates.
Last login: Sat Apr 30 07:03:52 2016 from 210.103.124.253
pyenv-virtualenv: no virtualenv has been activated.
and then run:
manage.py runserver --settings=abc.settings.production
So everyone can access my web service!
But after about 30 minutes, the SSH connection breaks by itself and prints this message:
packet_write_wait: Connection to 52.69.xxx.xxx: Broken pipe
After that, nobody can access my web service. In other words, my site can't be accessed when my computer is powered off and there is no SSH connection.
I want everyone to be able to access my web service 24/7.
Please give me a method. Thank you :)
When you want a command to keep running after your current shell terminates, you should launch it with the nohup command.
nohup makes the process ignore the hangup signal (SIGHUP) that is sent when its parent shell exits, so it is not killed when the shell terminates.
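For example, a minimal sketch (assuming the Django project lives in ~/myproject and uses the settings module from the question; 0.0.0.0:8000 makes the development server listen on all interfaces):

cd ~/myproject
nohup python manage.py runserver 0.0.0.0:8000 --settings=abc.settings.production > runserver.log 2>&1 &

The trailing & puts the process in the background, and its output goes to runserver.log. Note that manage.py runserver is Django's development server; for a 24/7 production site you would normally run Django under gunicorn or uWSGI behind nginx, managed by a service manager such as systemd.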