I have a Linux (Debian 9) VM running in GCP, and I can SSH to it via PuTTY. Now I want to connect to it with VNC, but it fails.
The following steps are what I have done so far.
I followed this article (https://linuxize.com/post/how-to-install-and-configure-vnc-on-debian-9/) to set up a VNC server, and it looks good:
clin4@chen-k8s-master:~$ sudo systemctl status vncserver@1.service
vncserver@1.service - Remote desktop service (VNC)
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-04-03 00:41:24 UTC; 17h ago
Process: 734 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :1 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
Main PID: 956 (vncserver)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/system-vncserver.slice/vncserver@1.service
‣ 956 /usr/bin/perl /usr/bin/vncserver :1 -geometry 1440x900 -alwaysshared -fg
Apr 03 00:41:23 chen-k8s-master systemd[1]: Starting Remote desktop service (VNC)...
Apr 03 00:41:23 chen-k8s-master systemd[734]: pam_unix(login:session): session opened for user clin4 by (uid=0)
Apr 03 00:41:24 chen-k8s-master systemd[1]: Started Remote desktop service (VNC).
Apr 03 00:41:25 chen-k8s-master systemd[956]: pam_unix(login:session): session opened for user clin4 by (uid=0)
I opened port 5901 (the range 5901-5910, in fact) via firewalld. The commands were along these lines (assuming the default public zone):
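sudo firewall-cmd --permanent --add-port=5901-5910/tcp
sudo firewall-cmd --reload
Listing the zone confirms the ports are open: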
clin4@chen-k8s-master:~$ sudo firewall-cmd --list-all
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: ssh dhcpv6-client
ports: 443/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 6783/tcp 30000-32767/tcp 5901-5910/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
I used netstat to check:
clin4@chen-k8s-master:~$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN 1003/Xtigervnc
tcp6 0 0 ::1:5901 :::* LISTEN 1003/Xtigervnc
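The output shows Xtigervnc listening only on 127.0.0.1. A quick reachability test, assuming netcat (nc) is available, from inside the VM and then from my own machine:
nc -zv localhost 5901
nc -zv <VM-public-IP> 5901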
I created a firewall rule in GCP, with a tag mapped to tcp:5901, and the VM has this tag:
remote-access Ingress remote-access IP ranges: 0.0.0.0/0 tcp:6443,3389,5900-5910 Allow 1000
I tried to use the Chrome VNC Viewer to connect to the VM's public IP on port 5901, and got the error message "Cannot establish connection. Are you sure you have entered the correct network address, and port number if necessary?"
What did I miss?
I have installed Nginx on my EC2 machine.
The Ubuntu version is:
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal
I installed it using the following command:
sudo apt install nginx
After that, I can see that the Nginx service is up and running:
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2022-06-19 07:27:28 UTC; 9min ago
Docs: man:nginx(8)
Process: 4938 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 4940 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 4941 (nginx)
Tasks: 3 (limit: 1116)
Memory: 4.9M
CGroup: /system.slice/nginx.service
├─4941 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─5003 nginx: worker process
└─5004 nginx: worker process
Jun 19 07:27:28 ip-172-31-42-178 systemd[1]: Starting A high performance web server and a reverse proxy server...
Jun 19 07:27:28 ip-172-31-42-178 systemd[1]: Started A high performance web server and a reverse proxy server.
But when I access the IP of the instance, I get a "site can't be reached" error.
Security group rules are added for ports 80 and 443.
SSH is active on port 22.
I checked /var/log/nginx, but all the files seem to be empty.
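As a local sanity check (assuming the default config), confirming from inside the instance that Nginx is actually listening and answering looks like:
sudo ss -tlnp | grep nginx
curl -I http://localhost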
UPDATE
When I check the UFW status using sudo ufw status, I see Status: inactive.
I am not sure whether enabling UFW via sudo ufw allow 'Nginx HTTP' would impact the current security group settings.
In Google Compute Engine, I would like to use port 22 for SFTP, but I cannot, since the VM says sshd is already running on that port. Is there any way to change the port sshd uses to a different one so I can free up 22?
I tried looking at "How to change sshd port on google cloud instance?", but it did not help, and the port for sshd was still 22 after I executed:
sudo netstat -pna | grep 22
The output is:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 53151/sshd
tcp6 0 0 :::22 :::* LISTEN 53151/sshd
Thank you for your time!
You can change the default SSH port 22 with the following steps:
Log on to the server as the root user.
Open the SSH configuration file sshd_config with the text editor vi: vi /etc/ssh/sshd_config.
Search for the entry Port 22.
Replace 22 with a port number between 1024 and 65535.
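For example, assuming you pick 2222 (the port used in the output below), the edited line would simply read:
Port 2222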
On SELinux-enabled systems (such as CentOS/RHEL), you must also allow sshd to bind to the new port, and can then verify the change:
semanage port -a -t ssh_port_t -p tcp New-SSH-Port
semanage port -l | grep ssh
ssh_port_t tcp 2222, 22
Restart sshd and check that it is listening on the new port:
systemctl restart sshd
netstat -pna | grep 2222
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 1525/sshd
tcp 0 0 10.128.0.33:2222 35.235.241.19:63372 ESTABLISHED 1413/sshd: mdmahboo
tcp6 0 0 :::2222 :::* LISTEN 1525/sshd
systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-02-21 02:46:55 UTC; 46s ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1525 (sshd)
CGroup: /system.slice/sshd.service
└─1525 /usr/sbin/sshd -D
Feb 21 02:46:55 cenos-1 systemd[1]: Stopped OpenSSH server daemon.
Feb 21 02:46:55 cenos-1 systemd[1]: Starting OpenSSH server daemon...
Feb 21 02:46:55 cenos-1 sshd[1525]: Server listening on 0.0.0.0 port 2222.
Feb 21 02:46:55 cenos-1 sshd[1525]: Server listening on :: port 2222.
Feb 21 02:46:55 cenos-1 systemd[1]: Started OpenSSH server daemon.
Add a firewall entry for port 2222 in the GCP firewall. A sketch with gcloud, assuming a made-up rule name:
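gcloud compute firewall-rules create allow-ssh-2222 --direction=INGRESS --allow=tcp:2222 --source-ranges=0.0.0.0/0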
After allowing the port as ingress in a firewall rule, you will be able to log in to the VM using your custom port number.
I am running Ubuntu 20.04 on an EC2 instance in AWS. For some reason, when the server is rebooted, Nginx starts but there is no service listening on port 80. Over SSH I can run sudo service nginx reload and everything starts working. Ideas, help, and suggestions are appreciated.
See the output after a fresh reboot:
ubuntu@ip-xxx-xxx-xxx-xxx:~$ ps aux | grep nginx
root 504 0.0 0.1 57184 1460 ? Ss 10:21 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 508 0.0 0.5 57944 5368 ? S 10:21 0:00 nginx: worker process
ubuntu 1039 0.0 0.2 8160 2572 pts/0 S+ 10:44 0:00 grep --color=auto nginx
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo service nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-09 10:21:59 UTC; 8min ago
Docs: man:nginx(8)
Process: 460 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 503 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 504 (nginx)
Tasks: 2 (limit: 1164)
Memory: 9.2M
CGroup: /system.slice/nginx.service
├─504 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─508 nginx: worker process
Feb 09 10:21:59 ip-xxx-xxx-xxx-xxx systemd[1]: Starting A high performance web server and a reverse proxy server...
Feb 09 10:21:59 ip-xxx-xxx-xxx-xxx systemd[1]: Started A high performance web server and a reverse proxy server.
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo netstat -tanpl | grep nginx
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 397/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 628/sshd: /usr/sbin
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp6 0 0 :::22 :::* LISTEN 628/sshd: /usr/sbin
tcp6 0 0 :::111 :::* LISTEN 1/init
udp 0 0 xxx.xxx.xxx.xxx 0.0.0.0:* 397/systemd-resolve
udp 0 0 xxx.xxx.xxx.xxx:68 0.0.0.0:* 394/systemd-network
udp 0 0 0.0.0.0:111 0.0.0.0:* 1/init
udp6 0 0 :::111 :::* 1/init
Update 1
It looks like this is an issue with EFS. The Nginx configs were saved on an EFS volume, and Nginx is starting before the EFS volume has been mounted. I will continue to investigate and read; updates to follow.
Systemd starts Nginx based on the unit file in /lib/systemd/system/ (the path may change depending on the OS version).
It would seem Nginx is starting before Ubuntu mounts the network drives at boot. Because Nginx was told to load all of its configs from a directory on the EFS volume, it could not load them when launched, as they do not exist until the filesystems finish mounting.
This can be solved by telling systemd to wait for the specific filesystem to be mounted before starting Nginx.
Edit the service file by running:
sudo nano /lib/systemd/system/nginx.service
Add the following line under the [Unit] section, amending the mount target as needed:
After=network.target var-www\x2defs.mount
If you do not know the mount unit name, you can find it with the following, replacing the path with the directory you mount the drive to:
systemctl list-units --type=mount | grep /var/www
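After editing the unit file, systemd has to re-read it before the new ordering takes effect; a typical sequence would be:
sudo systemctl daemon-reload
sudo systemctl restart nginx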
Article / Post where I found the information to solve the issue: https://unix.stackexchange.com/questions/246935/set-systemd-service-to-execute-after-fstab-mount
I have installed Redis on my AWS server, following this tutorial: https://www.digitalocean.com/community/tutorials/how-to-install-secure-redis-centos-7
$ systemctl start redis.service
$ systemctl enable redis
-> Created symlink /etc/systemd/system/multi-user.target.wants/redis.service → /usr/lib/systemd/system/redis.service.
$ systemctl status redis.service
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/redis.service.d
└─limit.conf
Active: failed (Result: exit-code) since Wed 2020-08-26 02:28:25 UTC; 10s ago
Main PID: 5012 (code=exited, status=1/FAILURE)
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Starting Redis persistent key-value database...
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Started Redis persistent key-value database.
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Failed with result 'exit-code'.
And when I check /var/log/redis/redis.log, this is what I see:
5012:C 26 Aug 2020 02:28:25.574 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5012:C 26 Aug 2020 02:28:25.574 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=5012, just started
5012:C 26 Aug 2020 02:28:25.574 # Configuration loaded
5012:C 26 Aug 2020 02:28:25.574 * supervised by systemd, will signal readiness
5012:M 26 Aug 2020 02:28:25.575 # Could not create server TCP listening socket 127.0.0.1:6379: bind: Address already in use
And upon checking the ports:
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 2812/redis-server *
tcp6 0 0 :::6379 :::* LISTEN 2812/redis-server *
This shows that port 6379 is already in use by an existing redis-server process (PID 2812).
So why can it not start?
Do I need to add any inbound/outbound rules in AWS? Please help.
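For what it is worth, inspecting the process that already holds the port (PID 2812 in the netstat output above) would look like:
ps -fp 2812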
UPDATE
Running the ExecStart command from the unit file (/usr/bin/redis-server /etc/redis.conf --supervised systemd) directly in the terminal returns bash: /etc/redis.conf: Permission denied. It looks like I need to set the right permissions on /etc/redis.conf.
$ ls -l /etc/redis.conf
-rw-r-----. 1 redis redis 62189 Aug 26 03:04 /etc/redis.conf
So what permission do I need to give here? Who should own this file?
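One check that seems relevant here: the unit file shows which user the service actually runs as, and that user (not the login shell the ExecStart command was tried from) is the one that needs read access to /etc/redis.conf. Assuming the unit path from the output above:
grep -E '^(User|Group)=' /usr/lib/systemd/system/redis.service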
sh-3.2# vagrant -v
Vagrant 1.4.3
sh-3.2# VBoxManage -v
4.3.6r91406
sh-3.2#
iptables has been removed...
On the Vagrant machine I can see the port, and it responds:
[vagrant@localhost ~]$ nmap localhost
Starting Nmap 5.51 ( http://nmap.org ) at 2014-02-04 04:00 CET
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00028s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 998 closed ports
PORT STATE SERVICE
111/tcp open rpcbind
8000/tcp open http-alt
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
[vagrant@localhost ~]$ curl localhost:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li>.bash_history
<li>.bash_logout
<li>.bash_profile
<li>.bashrc
<li>.ssh/
<li>.vbox_version
<li>postinstall.sh
</ul>
<hr>
</body>
</html>
[vagrant@localhost ~]$
My ports are forwarded...
[web1] -- 22 => 2222 (adapter 1)
[web1] -- 80 => 8080 (adapter 1)
[web1] -- 8000 => 8081 (adapter 1)
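For reference, assuming the v2 Vagrantfile format, these mappings would come from entries along the lines of:
config.vm.network "forwarded_port", guest: 8000, host: 8081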
Now... the port on the host looks open...
Starting Nmap 6.40 ( http://nmap.org ) at 2014-02-03 19:02 PST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000014s latency).
Not shown: 811 closed ports, 183 filtered ports
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
2222/tcp open EtherNet/IP-1
7778/tcp open interwise
8080/tcp open http-proxy
8081/tcp open blackice-icecap
Nmap done: 1 IP address (1 host up) scanned in 5.23 seconds
But curl never returns...
sh-3.2# curl localhost:8081
Vagrant/VirtualBox forwards the port to the first (NAT) adapter, so you need to bind your web server to its IP, or to 0.0.0.0. It is probably easier, though, to add a private_network address and skip port forwarding altogether.
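A minimal sketch of the private_network approach, assuming the v2 Vagrantfile format and a made-up host-only IP:
Vagrant.configure("2") do |config|
  # host-only address the host OS can reach directly
  config.vm.network "private_network", ip: "192.168.33.10"
end
The host can then reach the guest service directly, e.g. curl 192.168.33.10:8000, with no forwarded ports involved.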