I have a Google Cloud VM instance running Debian that hosts WordPress sites. After upgrading the OS version everything was working fine and I was able to connect via SSH using the 'Open SSH in browser' option.
Now when I try to connect to my VM instance using 'Open SSH in browser', it just keeps retrying. I checked the serial console output but there is no error message. Please refer to the log below.
However, I am able to connect via FTP using the same key; it is only SSH that fails. I also checked port 22 for that instance and the project, and it is open.
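For what it's worth, the firewall check was done roughly like this from the Cloud Shell (the rule name default-allow-ssh is the one in my project; yours may differ):
$ gcloud compute firewall-rules list
$ gcloud compute firewall-rules describe default-allow-ssh   # rule name is specific to my project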
Below are the last few lines of the serial console log after I restarted the VM:
Dec 6 09:01:20 localhost sendmail[383]: Starting Mail Transport Agent (MTA): sendmail.
Dec 6 09:01:20 localhost systemd[1]: Started LSB: powerful, efficient, and scalable Mail Transport Agent.
Dec 6 09:01:21 localhost systemd[1]: Started MariaDB 10.3.31 database server.
Dec 6 09:01:21 localhost systemd[1]: Reached target Multi-User System.
Dec 6 09:01:21 localhost systemd[1]: Reached target Graphical Interface.
Dec 6 09:01:21 localhost systemd[1]: Startup finished in 4.063s (kernel) + 9.852s (userspace) = 13.915s.
Dec 6 09:01:21 localhost /etc/mysql/debian-start[567]: Upgrading MySQL tables if necessary.
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: /usr/bin/mysql_upgrade: the '--basedir' option is always ignored
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Looking for 'mysql' as: /usr/bin/mysql
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: Version check failed. Got the following error when calling the 'mysql' command line client
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
Dec 6 09:01:21 localhost /etc/mysql/debian-start[570]: FATAL ERROR: Upgrade failed
Dec 6 09:01:21 localhost /etc/mysql/debian-start[580]: Checking for insecure root accounts.
Dec 6 09:01:21 localhost debian-start[564]: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
Debian GNU/Linux 10 localhost ttyS0
localhost login: Dec 6 09:01:28 localhost systemd[1]: Stopping User Manager for UID 110...
Dec 6 09:01:28 localhost systemd[497]: Stopped target Default.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Basic System.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Timers.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Paths.
Dec 6 09:01:28 localhost systemd[497]: Stopped target Sockets.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-browser.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache (access for web browsers).
Dec 6 09:01:28 localhost systemd[497]: dirmngr.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG network certificate management daemon.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-ssh.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent (ssh-agent emulation).
Dec 6 09:01:28 localhost systemd[497]: gpg-agent.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache.
Dec 6 09:01:28 localhost systemd[497]: gpg-agent-extra.socket: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Closed GnuPG cryptographic agent and passphrase cache (restricted).
Dec 6 09:01:28 localhost systemd[497]: Reached target Shutdown.
Dec 6 09:01:28 localhost systemd[497]: systemd-exit.service: Succeeded.
Dec 6 09:01:28 localhost systemd[497]: Started Exit the Session.
Dec 6 09:01:28 localhost systemd[497]: Reached target Exit the Session.
Dec 6 09:01:28 localhost systemd[1]: user@110.service: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: Stopped User Manager for UID 110.
Dec 6 09:01:28 localhost systemd[1]: Stopping User Runtime Directory /run/user/110...
Dec 6 09:01:28 localhost systemd[1]: run-user-110.mount: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: user-runtime-dir@110.service: Succeeded.
Dec 6 09:01:28 localhost systemd[1]: Stopped User Runtime Directory /run/user/110.
Dec 6 09:01:28 localhost systemd[1]: Removed slice User Slice of UID 110.
I tried the following solutions, which I found via a Google search.
Solution 1: Using PuTTYGen & PuTTY
Generated a key pair using PuTTYGen and added the public key under the project metadata, and also tried adding it under the instance metadata. I have set enable-oslogin to FALSE.
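The metadata changes were made roughly like this (the instance name, zone, and key file are placeholders):
$ gcloud compute instances add-metadata my-vm --zone=us-central1-a --metadata enable-oslogin=FALSE   # my-vm and the zone are placeholders
$ gcloud compute instances add-metadata my-vm --zone=us-central1-a --metadata-from-file ssh-keys=ssh-keys.txt   # file holds lines like "username:ssh-rsa AAAA... comment"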
But I got the following error message.
Solution 2: Using serial ports
When I tried to connect using different serial ports, it just got stuck at the connection screen, and when I checked the console log for that serial port it was blank.
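What I ran was roughly this (instance name and zone are placeholders):
$ gcloud compute instances add-metadata my-vm --zone=us-central1-a --metadata serial-port-enable=TRUE   # placeholder name/zone
$ gcloud compute connect-to-serial-port my-vm --zone=us-central1-a --port 2   # also tried the other ports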
Solution 3: New instance from a disk image
Created an image of the current disk and then created a new instance from that image. When I try to connect to that new instance, I face the same issue.
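The image and the new instance were created roughly like this (all names and the zone are placeholders):
$ gcloud compute images create wp-rescue-image --source-disk=my-vm --source-disk-zone=us-central1-a   # placeholder names
$ gcloud compute instances create my-vm-copy --zone=us-central1-a --image=wp-rescue-image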
Solution 4: Use a different machine to set up the CLI
I set up a fresh Google Cloud CLI on a new machine and tried to connect, but with no success; I got the same error.
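On the new machine the connection attempt looked roughly like this (project, zone, and instance name are placeholders):
$ gcloud compute ssh my-vm --project=my-project --zone=us-central1-a   # placeholders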
Solution 5: Increase disk space
Increased the disk space from 20 GB to 35 GB, but that didn't work. Usually a disk-space problem shows up as an error in the serial console log, but in my case there is no error message there.
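The resize itself was done roughly like this (disk name and zone are placeholders); as far as I know, Debian images on GCE normally grow the root filesystem automatically on the next boot:
$ gcloud compute disks resize my-vm --size=35GB --zone=us-central1-a   # disk name/zone are placeholders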
Please help and let me know if any additional information is required.
Thanks
Related
I have installed Redis on my AWS server, following this guide: https://www.digitalocean.com/community/tutorials/how-to-install-secure-redis-centos-7
$ systemctl start redis.service
$ systemctl enable redis
-> Created symlink /etc/systemd/system/multi-user.target.wants/redis.service → /usr/lib/systemd/system/redis.service.
$ systemctl status redis.service
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/redis.service.d
└─limit.conf
Active: failed (Result: exit-code) since Wed 2020-08-26 02:28:25 UTC; 10s ago
Main PID: 5012 (code=exited, status=1/FAILURE)
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Starting Redis persistent key-value database...
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Started Redis persistent key-value database.
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Failed with result 'exit-code'.
And when I check the /var/log/redis/redis.log this is what I see:
5012:C 26 Aug 2020 02:28:25.574 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5012:C 26 Aug 2020 02:28:25.574 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=5012, just started
5012:C 26 Aug 2020 02:28:25.574 # Configuration loaded
5012:C 26 Aug 2020 02:28:25.574 * supervised by systemd, will signal readiness
5012:M 26 Aug 2020 02:28:25.575 # Could not create server TCP listening socket 127.0.0.1:6379: bind: Address already in use
And upon checking the ports:
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 2812/redis-server *
tcp6 0 0 :::6379 :::* LISTEN 2812/redis-server *
This shows that port 6379 is already in use by a redis-server process (PID 2812 above).
So why can't it start then?
Do I need to add any inbound/outbound rules in AWS? Please help.
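If it helps, this is a minimal sketch of how I could find and stop the instance that already holds the port before starting the service again (PID 2812 is taken from the netstat output above):
$ sudo ss -ltnp | grep 6379    # confirm which process owns the port
$ sudo systemctl stop redis    # stop it if it was started by systemd
$ sudo kill 2812               # or kill it directly if it was started by hand
$ sudo systemctl start redis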
UPDATE
Running the ExecStart=/usr/bin/redis-server /etc/redis.conf --supervised systemd command on the terminal returns bash: /etc/redis.conf: Permission denied. It looks like I need to give the right permissions to the /etc/redis.conf file.
$ ls -l /etc/redis.conf
-rw-r-----. 1 redis redis 62189 Aug 26 03:04 /etc/redis.conf
So what permission do I need to give here? Who should own this file?
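One check I assume is relevant (not from the guide, just my guess) is whether the redis user can actually read the file with the current mode:
$ sudo -u redis cat /etc/redis.conf > /dev/null && echo "redis user can read the config"   # read test as the service user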
I'm trying to figure out why my HTTPS sites go down every time my server's DHCP lease gets renewed.
It happens consistently, but HTTP sites continue to work just fine.
Restarting systemd-networkd brings the sites back, but until that happens the HTTPS sites are basically unreachable.
Any tips on where to look first?
The weird thing is that the sites come back after the next DHCP lease renewal, then I lose connectivity on the one after that, then they come back, then I lose them, on and on.
This is what I see in syslog when it happens.
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: DHCP lease lost
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: DHCPv4 address 10.138.0.29/32 via 10.138.0.1
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: IPv6 successfully enabled
Apr 13 18:06:25 www-1 dbus-daemon[579]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.231' (uid=101 pid=13973 comm="/lib/systemd/systemd-networkd " label="unconfined")
Apr 13 18:06:25 www-1 systemd-networkd[13973]: ens4: Configured
Apr 13 18:06:25 www-1 systemd[1]: Starting Hostname Service...
Apr 13 18:06:25 www-1 dbus-daemon[579]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 18:06:25 www-1 systemd[1]: Started Hostname Service.
Apr 13 18:06:25 www-1 systemd-hostnamed[17589]: Changed host name to 'www-1.us-west1-b.c.camp-fire-259800.internal'
This issue seems to be related to the following:
https://moss.sh/name-resolution-issue-systemd-resolved/
and
https://github.com/systemd/systemd/issues/9243
I've disabled systemd-resolved and am using a static /etc/resolv.conf copied from /run/systemd/resolve/resolv.conf.
For internal DNS I'm using a private Google DNS Zone.
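The switch to the static file was done roughly like this (a sketch of my steps, not necessarily the canonical way):
$ sudo systemctl disable --now systemd-resolved
$ sudo rm /etc/resolv.conf    # it was a symlink into /run/systemd/resolve
$ sudo cp /run/systemd/resolve/resolv.conf /etc/resolv.conf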
Thanks.
I've been working with systemctl for a while and it's just not working.
I have tested the app with gunicorn directly and it works perfectly, but when I run the following:
systemctl daemon-reload
systemctl enable flaskapp
systemctl start flaskapp
However, I've been getting errors like this:
Dec 2 16:17:15 localhost systemd[1]: Started flaskapp.
Dec 2 16:17:15 localhost systemd[2130]: flaskapp.service: Failed at step EXEC spawning /home/flaskappuser/flaskapp/flaskvenv/bin/gunicorn: Permission denied
Dec 2 16:17:15 localhost systemd[1]: flaskapp.service: Main process exited, code=exited, status=203/EXEC
Dec 2 16:17:15 localhost systemd[1]: flaskapp.service: Unit entered failed state.
Dec 2 16:17:15 localhost systemd[1]: flaskapp.service: Failed with result 'exit-code'.
Dec 2 16:17:15 localhost systemd[1]: flaskapp.service: Service hold-off time over, scheduling restart.
Dec 2 16:17:15 localhost systemd[1]: Stopped flaskapp.
Dec 2 16:17:15 localhost systemd[1]: flaskapp.service: Start request repeated too quickly.
Dec 2 16:17:15 localhost systemd[1]: Failed to start flaskapp.
Dec 2 16:17:21 localhost systemd[1]: supervisor.service: Service hold-off time over, scheduling restart.
Dec 2 16:17:21 localhost systemd[1]: Stopped Supervisor process control system for UNIX.
Dec 2 16:17:21 localhost systemd[1]: Started Supervisor process control system for UNIX.
I tried with sudo and it doesn't work either. I have searched everywhere online but found nothing. Can anyone help me with this?
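In case it helps narrow things down, this is a sketch of how the execute permissions along the path from the error could be checked (namei ships with util-linux; the path is copied from the log above):
$ namei -l /home/flaskappuser/flaskapp/flaskvenv/bin/gunicorn   # every parent directory needs the x bit for the service user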
I am getting these errors from MailEnable; the OS is CentOS. The errors are from /var/log/maillog, as suggested by @OlegNeumyvakin.
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: handlers_stderr:$
Sep 8 03:33:12 localhost journal: plesk sendmail[38416]: SKIP during call$
Sep 8 03:33:12 localhost postfix/pickup[35664]: 66B7B21F2D4F: uid=0 from=$
Sep 8 03:33:12 localhost postfix/cleanup[38422]: 66B7B21F2D4F: message-id$
Sep 8 03:33:12 localhost postfix/qmgr[9634]: 66B7B21F2D4F: from=<root@loc$
The server cannot send or receive any email. I am trying to get it to work since it is for a site that needs to send and receive emails.
You can check your virtual address with the command:
postmap -q mail@example.tld hash:/var/spool/postfix/plesk/virtual
virtual.db is a Berkeley DB file.
You can check its contents with the Berkeley DB dump utility:
# db5.1_dump -p /var/spool/postfix/plesk/virtual.db
VERSION=3
format=print
type=hash
h_nelem=4103
db_pagesize=4096
HEADER=END
drweb@example.tld\00
drweb@localhost.localdomain\00
kluser@example.tld\00
kluser@localhost.localdomain\00
mail1@example.tld\00
mail1@example.tld\00
postmaster@example.tld\00
postmaster@localhost.localdomain\00
root@dexample.tld\00
root@localhost.localdomain\00
anonymous@example.tld\00
anonymous@localhost.localdomain\00
mailer-daemon@example.tld\00
mailer-daemon@localhost.localdomain\00
DATA=END
You can install this utility with yum install libdb-utils.
Also, in case you have issues with sending mail, you can check the limitations on outgoing email messages at Tools & Settings > Mail Server Settings and, if you have enabled it, at Tools & Settings > Outgoing Mail Control.
I am trying to install nginx on RHEL 7 and the process doesn't start. The following is the log.
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal systemd[1]: Starting nginx - high performance web server...
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal nginx[30974]: nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal nginx[30974]: nginx: [emerg] open() "/mnt/nginx_logs/pubstore/access.log" failed (13: Permission denied)
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal nginx[30974]: nginx: configuration file /etc/nginx/nginx.conf test failed
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal systemd[1]: nginx.service: control process exited, code=exited status=1
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal systemd[1]: Failed to start nginx - high performance web server.
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal systemd[1]: Unit nginx.service entered failed state.
The permissions of the log files are as follows. I have given full permissions but it still doesn't start.
-rwxrwxrwx. 1 nginx nginx 0 Nov 13 02:07 access.log
-rwxrwxrwx. 1 nginx nginx 0 Nov 13 02:07 error.log
The installation is done via a Puppet agent on an Amazon EC2 instance.
This line:
Nov 13 06:36:42 ip-10-0-0-10.ec2.internal nginx[30974]: nginx: [emerg] open() "/mnt/nginx_logs/pubstore/access.log" failed (13: Permission denied)
It tells you that the user you are running nginx as does not have permission to write to the log file it's configured to write to.
Since the logs are being stored in a non-standard location, you will likely have to ensure that the directory you want to store logs in is writable by the same user that nginx runs as.
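As a sketch (the path is taken from the error message, and the SELinux part is an assumption on my side: on RHEL 7 the policy can block writes even when the Unix permissions look fine), the usual checks and fixes look like this:
$ namei -l /mnt/nginx_logs/pubstore/access.log   # every parent directory needs x, and the directory needs w, for the nginx user
$ sudo chown -R nginx:nginx /mnt/nginx_logs/pubstore
$ sudo semanage fcontext -a -t httpd_log_t "/mnt/nginx_logs(/.*)?"   # only if SELinux is enforcing
$ sudo restorecon -Rv /mnt/nginx_logs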