In Google Compute Engine, I would like to use port 22 for SFTP, but I cannot because the VM says sshd is already running on that port. Is there any way to change the port sshd uses so I can free up 22?
I tried to follow "How to change sshd port on google cloud instance?", but it did not help; sshd was still listening on port 22 when I checked with:
sudo netstat -pna | grep 22
The output is:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 53151/sshd
tcp6 0 0 :::22 :::* LISTEN 53151/sshd
Thank you for your time!
You can change the SSH port by following the steps below:
Log on to the server as the root user (or use sudo).
Open the SSH configuration file sshd_config with a text editor such as vi: vi /etc/ssh/sshd_config
Search for the entry Port 22 (uncomment it if necessary).
Replace 22 with an unused port number between 1024 and 65535, e.g. 2222.
On systems with SELinux (e.g. CentOS/RHEL), tell SELinux to allow sshd to listen on the new port:
semanage port -a -t ssh_port_t -p tcp New-SSH-Port
Verify the change:
semanage port -l | grep ssh
ssh_port_t tcp 2222, 22
Restart the SSH daemon:
systemctl restart sshd
netstat -pna | grep 2222
tcp 0 0 0.0.0.0:2222 0.0.0.0:* LISTEN 1525/sshd
tcp 0 0 10.128.0.33:2222 35.235.241.19:63372 ESTABLISHED 1413/sshd: mdmahboo
tcp6 0 0 :::2222 :::* LISTEN 1525/sshd
systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-02-21 02:46:55 UTC; 46s ago
Docs: man:sshd(8)
man:sshd_config(5)
Main PID: 1525 (sshd)
CGroup: /system.slice/sshd.service
└─1525 /usr/sbin/sshd -D
Feb 21 02:46:55 cenos-1 systemd[1]: Stopped OpenSSH server daemon.
Feb 21 02:46:55 cenos-1 systemd[1]: Starting OpenSSH server daemon...
Feb 21 02:46:55 cenos-1 sshd[1525]: Server listening on 0.0.0.0 port 2222.
Feb 21 02:46:55 cenos-1 sshd[1525]: Server listening on :: port 2222.
Feb 21 02:46:55 cenos-1 systemd[1]: Started OpenSSH server daemon.
Add a firewall entry for port 2222 in the GCP firewall.
You will then be able to log in to the VM using your custom port number, once the port is allowed as ingress in a firewall rule.
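For the GCP side, a rule along these lines should work (a sketch: the rule name allow-ssh-2222 and the default network are assumptions, adjust them to your project):

```shell
# Allow ingress to the new SSH port (name/network/source range are placeholders)
gcloud compute firewall-rules create allow-ssh-2222 \
    --network=default \
    --allow=tcp:2222 \
    --source-ranges=0.0.0.0/0

# Then connect on the custom port
ssh -p 2222 your-user@YOUR_VM_EXTERNAL_IP
```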
I am running Ubuntu 20.04 on an EC2 instance in AWS. For some reason, when the server is rebooted, Nginx starts but there is no service listening on port 80. After connecting via SSH I can run sudo service nginx reload and everything starts running. Ideas, help and suggestions are appreciated.
See output after a fresh reboot.
ubuntu@ip-xxx-xxx-xxx-xxx:~$ ps aux | grep nginx
root 504 0.0 0.1 57184 1460 ? Ss 10:21 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 508 0.0 0.5 57944 5368 ? S 10:21 0:00 nginx: worker process
ubuntu 1039 0.0 0.2 8160 2572 pts/0 S+ 10:44 0:00 grep --color=auto nginx
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo service nginx status
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-09 10:21:59 UTC; 8min ago
Docs: man:nginx(8)
Process: 460 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 503 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 504 (nginx)
Tasks: 2 (limit: 1164)
Memory: 9.2M
CGroup: /system.slice/nginx.service
├─504 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
└─508 nginx: worker process
Feb 09 10:21:59 ip-xxx-xxx-xxx-xxx systemd[1]: Starting A high performance web server and a reverse proxy server...
Feb 09 10:21:59 ip-xxx-xxx-xxx-xxx systemd[1]: Started A high performance web server and a reverse proxy server.
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo netstat -tanpl | grep nginx
ubuntu@ip-xxx-xxx-xxx-xxx:~$ sudo netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 397/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 628/sshd: /usr/sbin
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/init
tcp6 0 0 :::22 :::* LISTEN 628/sshd: /usr/sbin
tcp6 0 0 :::111 :::* LISTEN 1/init
udp 0 0 xxx.xxx.xxx.xxx 0.0.0.0:* 397/systemd-resolve
udp 0 0 xxx.xxx.xxx.xxx:68 0.0.0.0:* 394/systemd-network
udp 0 0 0.0.0.0:111 0.0.0.0:* 1/init
udp6 0 0 :::111 :::* 1/init
Update 1
It looks like this is an issue with EFS. The Nginx configs were saved on an EFS volume, and Nginx is loading before the EFS volume has been mounted. I will continue to investigate and read; updates to follow.
Systemd starts Nginx based on the unit file in /lib/systemd/system/ (the path may change depending on the OS version).
It would seem Nginx is starting before Ubuntu mounts the network drives on startup. Because Nginx was told to load all configs from a directory on the EFS volume, it could not load the configs when launched as they do not exist until the filesystems finish mounting.
This can be solved by telling Nginx to wait for the specific file system to be mounted before starting.
Edit the service file by running:
sudo nano /lib/systemd/system/nginx.service
Add the following line under the [Unit] section, amending the mount unit name as needed:
After=network.target var-www\x2defs.mount
If you do not know the mount unit name, you can find it as follows, replacing the path with the directory where the drive is mounted:
systemctl list-units --type=mount | grep /var/www
Article / Post where I found the information to solve the issue: https://unix.stackexchange.com/questions/246935/set-systemd-service-to-execute-after-fstab-mount
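An alternative to editing the packaged unit file directly (which a package upgrade may overwrite) is a drop-in override created with sudo systemctl edit nginx. A sketch, assuming the mount unit is var-www\x2defs.mount; RequiresMountsFor is a [Unit] directive that both pulls in and orders the service after the mount units for the given path:

```ini
# /etc/systemd/system/nginx.service.d/override.conf (created by systemctl edit)
[Unit]
After=network.target var-www\x2defs.mount
RequiresMountsFor=/var/www
```

Run sudo systemctl daemon-reload afterwards so systemd picks up the override.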
I have installed Redis on my AWS server. I followed this guide: https://www.digitalocean.com/community/tutorials/how-to-install-secure-redis-centos-7
$ systemctl start redis.service
$ systemctl enable redis
-> Created symlink /etc/systemd/system/multi-user.target.wants/redis.service → /usr/lib/systemd/system/redis.service.
$ systemctl status redis.service
● redis.service - Redis persistent key-value database
Loaded: loaded (/usr/lib/systemd/system/redis.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/redis.service.d
└─limit.conf
Active: failed (Result: exit-code) since Wed 2020-08-26 02:28:25 UTC; 10s ago
Main PID: 5012 (code=exited, status=1/FAILURE)
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Starting Redis persistent key-value database...
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: Started Redis persistent key-value database.
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Main process exited, code=exited, status=1/FAILURE
Aug 26 02:28:25 ip-xxx-xx-xx-xx.ap-southeast-2.compute.internal systemd[1]: redis.service: Failed with result 'exit-code'.
And when I check the /var/log/redis/redis.log this is what I see:
5012:C 26 Aug 2020 02:28:25.574 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5012:C 26 Aug 2020 02:28:25.574 # Redis version=5.0.3, bits=64, commit=00000000, modified=0, pid=5012, just started
5012:C 26 Aug 2020 02:28:25.574 # Configuration loaded
5012:C 26 Aug 2020 02:28:25.574 * supervised by systemd, will signal readiness
5012:M 26 Aug 2020 02:28:25.575 # Could not create server TCP listening socket 127.0.0.1:6379: bind: Address already in use
And upon checking the ports:
$ netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 2812/redis-server *
tcp6 0 0 :::6379 :::* LISTEN 2812/redis-server *
This shows that port 6379 is indeed being used by a redis-server process.
So why can't it start, then?
Do I need to add any inbound/outbound rules in AWS? Please help.
UPDATE
Running the ExecStart command /usr/bin/redis-server /etc/redis.conf --supervised systemd in the terminal returns bash: /etc/redis.conf: Permission denied. It looks like I need to give the right permissions to the /etc/redis.conf file.
$ ls -l /etc/redis.conf
-rw-r-----. 1 redis redis 62189 Aug 26 03:04 /etc/redis.conf
So what permission do I need to give here? Who should own this file?
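Side note on the log line "Could not create server TCP listening socket 127.0.0.1:6379: bind: Address already in use": this is exactly what happens when a second process tries to bind a port another process already holds, which matches the netstat output showing PID 2812 already on 6379. A minimal reproduction of the error, assuming python3 is available:

```shell
# Reproduce "Address already in use": two sockets binding the same port
python3 - <<'EOF'
import socket

a = socket.socket()
a.bind(("127.0.0.1", 0))          # bind to any free ephemeral port
a.listen(1)
port = a.getsockname()[1]

b = socket.socket()
try:
    b.bind(("127.0.0.1", port))   # second bind on the same port fails
except OSError as e:
    print("bind failed:", e.strerror)
EOF
```

On Linux this prints bind failed: Address already in use, the same errno the Redis log reports.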
I have a Linux (Debian 9) VM running in GCP, and I can SSH to it via PuTTY. Now I want to connect to it with VNC, but the connection fails.
The following steps are what I did so far.
I followed this article (https://linuxize.com/post/how-to-install-and-configure-vnc-on-debian-9/) to set up a VNC server, and it looks good:
clin4@chen-k8s-master:~$ sudo systemctl status vncserver@1.service
● vncserver@1.service - Remote desktop service (VNC)
Loaded: loaded (/etc/systemd/system/vncserver@.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-04-03 00:41:24 UTC; 17h ago
Process: 734 ExecStartPre=/bin/sh -c /usr/bin/vncserver -kill :1 > /dev/null 2>&1 || : (code=exited, status=0/SUCCESS)
Main PID: 956 (vncserver)
Tasks: 0 (limit: 4915)
CGroup: /system.slice/system-vncserver.slice/vncserver@1.service
‣ 956 /usr/bin/perl /usr/bin/vncserver :1 -geometry 1440x900 -alwaysshared -fg
Apr 03 00:41:23 chen-k8s-master systemd[1]: Starting Remote desktop service (VNC)...
Apr 03 00:41:23 chen-k8s-master systemd[734]: pam_unix(login:session): session opened for user clin4 by (uid=0)
Apr 03 00:41:24 chen-k8s-master systemd[1]: Started Remote desktop service (VNC).
Apr 03 00:41:25 chen-k8s-master systemd[956]: pam_unix(login:session): session opened for user clin4 by (uid=0)
I opened port 5901 (the range 5901-5910) via firewalld:
clin4@chen-k8s-master:~$ sudo firewall-cmd --list-all
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: ssh dhcpv6-client
ports: 443/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 10255/tcp 6783/tcp 30000-32767/tcp 5901-5910/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
Use netstat to check
clin4@chen-k8s-master:~$ sudo netstat -tulpn | grep LISTEN
tcp 0 0 127.0.0.1:5901 0.0.0.0:* LISTEN 1003/Xtigervnc
tcp6 0 0 ::1:5901 :::* LISTEN 1003/Xtigervnc
Created a firewall rule in GCP with a tag mapping to tcp:5901; the VM has this tag:
remote-access Ingress remote-access IP ranges: 0.0.0.0/0 tcp:6443,3389,5900-5910 Allow 1000
Trying to connect to the VM's public IP on port 5901 with the Chrome VNC viewer fails with the error message "Cannot establish connection. Are you sure you have entered the correct network address, and port number if necessary?"
What did I miss?
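One detail worth noting in the netstat output above: Xtigervnc is bound to 127.0.0.1:5901, i.e. it listens on loopback only, so connections arriving on the external interface never reach it regardless of firewall rules. A quick sketch of spotting loopback-only listeners by parsing a netstat line (the sample line is copied from the output above):

```shell
# Column 4 of netstat's TCP rows is the local address (addr:port)
line='tcp        0      0 127.0.0.1:5901          0.0.0.0:*               LISTEN      1003/Xtigervnc'
echo "$line" | awk '{ split($4, a, ":"); if (a[1] == "127.0.0.1") print "port " a[2] " is bound to loopback only" }'
```

TigerVNC's vncserver typically controls this with a -localhost option; the common pattern is to keep the loopback bind and tunnel port 5901 over SSH rather than exposing VNC publicly.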
I have already opened the ports, but it's still not working.
From gcloud on my local machine:
C:\Program Files (x86)\Google\Cloud SDK>gcloud compute firewall-rules list
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY
default-allow-https default INGRESS 1000 tcp:443
default-allow-icmp default INGRESS 65534 icmp
default-allow-internal default INGRESS 65534 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default INGRESS 65534 tcp:3389
default-allow-ssh default INGRESS 65534 tcp:22
django default EGRESS 1000 tcp:8000,tcp:80,tcp:8080,tcp:443
django-in default INGRESS 1000 tcp:8000,tcp:80,tcp:8080,tcp:443
From the instance on google cloud:
admin-u5214628@instance-1:~$ wget localhost:8080
--2017-11-22 01:23:56-- http://localhost:8080/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 302 FOUND
Location: http://localhost:8080/login/?next=/ [following]
--2017-11-22 01:23:56-- http://localhost:8080/login/?next=/
Connecting to localhost (localhost)|127.0.0.1|:8080... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
index.html [ <=> ] 6.26K --.-KB/s in 0s
2017-11-22 01:23:56 (161 MB/s) - ‘index.html’ saved [6411]
But via the external ip, nothing is shown:
http://35.197.1.158:8080/
I checked the port by the following command:
root@instance-1:/etc# netstat -ntlp | grep LISTEN
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1539/redis-server 1
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 2138/python
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1735/sshd
tcp6 0 0 :::22 :::* LISTEN 1735/sshd
I'm not sure whether this is enough on the Ubuntu firewall side; it looks OK to me.
And on the instance, I checked everything I can think of.
And the UFW (uncomplicated firewall):
root@instance-1:~# ufw status
Status: inactive
From my understanding, this means it is off and is not blocking anything.
As suggested, I tried to configure iptables:
iptables -P INPUT ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
Then I save it:
root@instance-1:~# iptables-save -c
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*mangle
:PREROUTING ACCEPT [175:18493]
:INPUT ACCEPT [175:18493]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [154:15965]
:POSTROUTING ACCEPT [154:15965]
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*nat
:PREROUTING ACCEPT [6:300]
:INPUT ACCEPT [6:300]
:OUTPUT ACCEPT [6:360]
:POSTROUTING ACCEPT [6:360]
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
# Generated by iptables-save v1.6.0 on Thu Nov 23 00:16:44 2017
*filter
:INPUT ACCEPT [169:18193]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [163:17013]
[6:300] -A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
[0:0] -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
COMMIT
# Completed on Thu Nov 23 00:16:44 2017
It looks like this now:
root@instance-1:~# iptables -v -n -x -L
Chain INPUT (policy ACCEPT 80 packets, 5855 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080
0 0 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 52 packets, 6047 bytes)
pkts bytes target prot opt in out source destination
To make sure the rules are applied and live:
iptables-save > /etc/iptables.rules
iptables-apply /etc/iptables.rules
I don't think I need to restart/reset the instance.
I thought I might need to forward traffic to the local IP:
# sysctl net.ipv4.ip_forward=1
# iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to-destination 127.0.0.1:8000
# iptables -t nat -A POSTROUTING -j MASQUERADE
# python manage.py runserver
Performing system checks...
System check identified no issues (0 silenced).
November 24, 2017 - 17:54:00
Django version 1.8.18, using settings 'codebench.settings'
Starting development server at http://127.0.0.1:8000/
This did not work either...
Tried:
python manage.py runserver 0.0.0.0:8080 &
This definitely works on my local machine, just not on the Google instance; I'm puzzled.
In my experience, when you create a Compute Engine instance, you should explicitly flag that HTTP(S) access is allowed. That may be one thing to look at.
Another one - the OS you deploy/use in the compute engine instance might have its own firewall rules. They need to be amended accordingly.
Based on the newly provided information about UFW and Ubuntu: I am not very confident with Ubuntu, but I understand that UFW is a wrapper around iptables. I may be wrong, but I suspect it would be better to enable it; then you might be able to get more details about the firewall configuration.
I believe the problem is that the server is listening only on 127.0.0.1:8080, not on 0.0.0.0:8080 as it should be. That is why you get a reply from http://localhost:8080/ but not from http://35.197.1.158:8080. For more details, check out this Stack Overflow answer: What is the difference between 0.0.0.0, 127.0.0.1 and localhost?
To configure Apache to listen on 0.0.0.0:8080, or on a specific IP and port, follow this document: https://httpd.apache.org/docs/2.4/bind.html
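To illustrate the difference in a self-contained way (assumes python3 is available): a socket bound to 127.0.0.1 is reachable only from the machine itself, while 0.0.0.0 means "every interface", including the one behind the instance's external IP:

```shell
python3 - <<'EOF'
import socket

for host in ("127.0.0.1", "0.0.0.0"):
    s = socket.socket()
    s.bind((host, 0))             # port 0 = pick any free port
    print(host, "reachable from other hosts:", host == "0.0.0.0")
    s.close()
EOF
```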
sh-3.2# vagrant -v
Vagrant 1.4.3
sh-3.2# VBoxManage -v
4.3.6r91406
sh-3.2#
iptables has been removed...
On the Vagrant machine I can see the port, and it responds:
[vagrant@localhost ~]$ nmap localhost
Starting Nmap 5.51 ( http://nmap.org ) at 2014-02-04 04:00 CET
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00028s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 998 closed ports
PORT STATE SERVICE
111/tcp open rpcbind
8000/tcp open http-alt
Nmap done: 1 IP address (1 host up) scanned in 0.05 seconds
[vagrant@localhost ~]$ curl localhost:8000
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li>.bash_history
<li>.bash_logout
<li>.bash_profile
<li>.bashrc
<li>.ssh/
<li>.vbox_version
<li>postinstall.sh
</ul>
<hr>
</body>
</html>
[vagrant@localhost ~]$
My ports are forwarded...
[web1] -- 22 => 2222 (adapter 1)
[web1] -- 80 => 8080 (adapter 1)
[web1] -- 8000 => 8081 (adapter 1)
Now... the port on the host looks open...
Starting Nmap 6.40 ( http://nmap.org ) at 2014-02-03 19:02 PST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000014s latency).
Not shown: 811 closed ports, 183 filtered ports
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
2222/tcp open EtherNet/IP-1
7778/tcp open interwise
8080/tcp open http-proxy
8081/tcp open blackice-icecap
Nmap done: 1 IP address (1 host up) scanned in 5.23 seconds
But curl never returns...
sh-3.2# curl localhost:8081
Vagrant/VirtualBox forwards the port to the first (NAT) adapter, so you need to bind your web server to its IP, or to 0.0.0.0. It is probably also easier to add a private_network address and skip port forwarding altogether.
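As a sketch, the two options from the answer in a Vagrantfile (the box name and the 192.168.56.10 address are placeholders):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "your-box"

  # Option 1: keep NAT port forwarding; the guest service must then
  # listen on 0.0.0.0, not 127.0.0.1, for the forward to reach it.
  config.vm.network "forwarded_port", guest: 8000, host: 8081

  # Option 2: a host-only private network address; connect to
  # 192.168.56.10:8000 directly and skip forwarding altogether.
  config.vm.network "private_network", ip: "192.168.56.10"
end
```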