How to change the sshd port on a Google Cloud instance? - google-cloud-platform

I changed the port in /etc/ssh/sshd_config to 23 and restarted sshd (sudo systemctl restart sshd). I added a firewall rule for port 23:
gcloud compute firewall-rules create debug-ssh-23 --allow tcp:23
But it still isn't working: SSH connections time out. How do I change the sshd port properly?
EDIT:
The firewall rule is:
{
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": [
        "23"
      ]
    }
  ],
  "creationTimestamp": "2018-10-02T14:02:23.646-07:00",
  "description": "",
  "direction": "INGRESS",
  "disabled": false,
  "id": "3968818270732968496",
  "kind": "compute#firewall",
  "name": "debug-ssh-23",
  "network": "https://www.googleapis.com/compute/v1/projects/foo/global/networks/default",
  "priority": 1000,
  "selfLink": "https://www.googleapis.com/compute/v1/projects/foo/global/firewalls/debug-ssh-23",
  "sourceRanges": [
    "0.0.0.0/0"
  ]
}
But I can't access a simple nginx service on this port either. On port 80 it works, and the rule for 80 is similar.
sshd_config:
# Force protocol v2 only
Protocol 2
# Disable IPv6 for now
AddressFamily inet
# /etc is read-only. Fetch keys from stateful partition
# Not using v1, so no v1 key
HostKey /mnt/stateful_partition/etc/ssh/ssh_host_rsa_key
HostKey /mnt/stateful_partition/etc/ssh/ssh_host_ed25519_key
PasswordAuthentication no
ChallengeResponseAuthentication no
PermitRootLogin no
UsePAM yes
PrintMotd no
PrintLastLog no
UseDns no
Subsystem sftp internal-sftp
PermitTunnel no
AllowTcpForwarding yes
X11Forwarding no
Ciphers aes128-gcm@openssh.com,aes256-gcm@openssh.com,chacha20-poly1305@openssh.com,aes128-ctr,aes192-ctr,aes256-ctr
# Compute times out connections after 10 minutes of inactivity. Keep alive
# ssh connections by sending a packet every 7 minutes.
ClientAliveInterval 420
AcceptEnv EDITOR LANG LC_ALL PAGER TZ

Besides the sshd_config option Port, also see ListenAddress.
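As a minimal sketch, the relevant sshd_config lines could look like this (port 23 matches the question; ListenAddress is optional and shown only for illustration):
# /etc/ssh/sshd_config
Port 23
ListenAddress 0.0.0.0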
Run sudo systemctl reload sshd.service to apply the changes.
You need to add the --ssh-flag option in order to connect on another port:
gcloud compute --project "PROJECT_NAME" ssh --zone "us-central1-b" "instance-1" --ssh-flag="-p 23"
In the Cloud Console, there's also "Open in browser window on custom port".
To see if and where sshd is listening:
sudo cat /var/log/secure | grep sshd
The output should look roughly like this:
instance-1 sshd[1192]: Server listening on 0.0.0.0 port 23.
instance-1 sshd[1192]: Server listening on :: port 23.
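Alternatively, you can check the listening socket directly; ss is available on most modern distributions:
sudo ss -tlnp | grep sshd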

I did not need to add the ssh-flag to my gcloud command (which I could view but could not figure out how to edit). I followed these instructions:
Using SSH through airplane WiFi that blocks port 22
But my CentOS installation had a blank sshd_config. I simply added this line to it:
Port 80
and ran (I had executed the commands in the link above first):
systemctl restart sshd.service
and then I was up and running SSHD on port 80.
Other things to note:
I was using this because I wanted to work during a JetBlue flight, and I could not connect to my server over SSH (it seems they block port 22 traffic, and I don't want to change the port on which I run sshd). So I created this VM to run SSH on port 80, and I could then connect from there to my server.
To save on my $300 in Google Cloud credit, I turned my VM instance off. When I was on the flight, I went to turn it on, and there were not enough resources in that Google Cloud zone to start my instance. Argh!! Set your VM instance to running before you leave on your flight to make sure it'll be available. Moving it to another zone was a PITA, so I created a new instance, and found I could connect to it via the Cloud Console's "connect via SSH in a browser window" even though it was set to run SSH on the default port 22. So it was not necessary to change the port SSH was running on anyway (at least for JetBlue)...
When I created this second VM instance using the CentOS 7 image, it created a full sshd_config file this time, and I just changed the following line:
#Port 22
to:
Port 80
And also executed all the commands in the first link in my post.
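One caveat, hedged: if SELinux is enforcing on CentOS 7, sshd may be denied permission to bind a non-standard port. In that case something like the following (from the policycoreutils-python package; port 80 is already defined as http_port_t, hence -m to modify rather than -a to add) may also be needed:
sudo semanage port -m -t ssh_port_t -p tcp 80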

Related

My Docker container running inside AWS Elastic Beanstalk is not able to connect with the host

My application runs on port 5000 and I have exposed port 5000 in the Dockerfile.
This is my docker-compose.yml file:
"services":
"backend":
"image": "<imageURL>"
"ports":
- "5000:8080"
Container port and application port: 5000
Server port: 8080
The security group has also been configured properly, and the application is able to connect to the database, but it does not respond when I try to ping the server's IP.
My application has a ping API.
Not sure what you are referring to with an "EBS" server, as EBS is Elastic Block Storage, not compute.
If you're using AWS ECS, you need to configure "PortMapping" to map external ports to the container ports:
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_PortMapping.html
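A sketch of what that might look like inside an ECS task definition's container definition (values mirror the question and are illustrative):
"portMappings": [
    {
        "containerPort": 5000,
        "hostPort": 8080,
        "protocol": "tcp"
    }
]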
If you're using EC2, make sure your service is listening on all IPs, using the netstat command, and that the security group inbound and outbound rules are configured properly:
netstat -anlp | grep [your port]
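Illustrative output, assuming a hypothetical service on port 5000: a local address of 0.0.0.0 (or :::) means it accepts connections from any interface, whereas 127.0.0.1 means it is only reachable from inside the instance:
tcp        0      0 0.0.0.0:5000        0.0.0.0:*        LISTEN      1234/myapp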

Why can't my Ubuntu Ansible ping my AWS machine?

I have /etc/ansible/hosts locally as:
[example]
172.31.20.nnn # nnn not shown, is 1-255
I created an AWS Ubuntu instance with a .pem file in my local directory, and I can log in OK:
ubuntu@ip-172-31-20-nnn:~$ whoami
ubuntu
ubuntu@ip-172-31-20-nnn:~$
However when I try
ansible example -m ping -u ubuntu
I get
172.31.20.nnn | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: ssh: connect to host 172.31.20.nnn port 22: Connection timed out",
"unreachable": true
}
nnn is a number in all cases, just not shown here
Stuck on adding ping: the security group shows the existing rule for port 22. What should I do?
I don't see ping in the dropdown.
Should I delete the existing port 22 rule that was already there?
Tried that. No luck.
Your EC2 security group is blocking the ICMP requests. You just need to allow the PING (ICMP) service on it.
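Note that ansible -m ping connects over SSH rather than ICMP, so TCP port 22 must be open as well. A sketch with the AWS CLI; the security group ID and CIDR below are placeholders:
# Allow SSH (what Ansible's ping module actually uses)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24
# Optionally allow ICMP echo as well
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol icmp --port -1 --cidr 203.0.113.0/24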

Accessing GCP Memorystore from local machines

What's the best way to access Memorystore from local machines during development? Is there something like the Cloud SQL Proxy that I can use to set up a tunnel?
You can spin up a Compute Engine instance and use port forwarding to connect to your Redis machine.
For example, if your Redis machine has the internal IP address 10.0.0.3, you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the SSH tunnel open, you can connect to localhost:6379.
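With the tunnel open, a quick test from the local machine (assuming redis-cli is installed) should answer PONG:
redis-cli -h localhost -p 6379 ping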
Update: this is now officially documented:
https://cloud.google.com/memorystore/docs/redis/connecting-redis-instance#connecting_from_a_local_machine_with_port_forwarding
I created a VM on Google Cloud:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
then SSHed into it and installed haproxy:
sudo su
apt-get install haproxy
then updated the config file /etc/haproxy/haproxy.cfg:
# ...existing file contents
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Then restart haproxy:
/etc/init.d/haproxy restart
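You can also validate the edited file at any time; with -c, haproxy only checks the configuration and exits:
haproxy -c -f /etc/haproxy/haproxy.cfg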
I was then able to connect to Memorystore from my local machine for development.
You can spin up a Compute Engine instance and set up haproxy using the haproxy Docker image; haproxy will then forward your TCP requests to Memorystore.
For example, I want to access a Memorystore instance with IP 10.0.0.12, so I added the following haproxy config:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server 10.0.0.12:6379 check
So now you can access Memorystore from your local machine using the following command:
redis-cli -h <your-haproxy-public-ipaddress> -p 6379
Note: replace <your-haproxy-public-ipaddress> with your actual haproxy IP address.
Hope that helps you solve your problem.
This post builds on earlier ones and should help you bypass firewall issues.
Create a virtual machine in the same region (and zone, to be safe) as your Memorystore instance. On this machine:
Add a network tag with which we will create a firewall rule to allow traffic on port 6379
Add an external IP with which you will access this VM
SSH into this machine and install haproxy:
sudo su
apt-get install haproxy
Add the following below the existing config in the /etc/haproxy/haproxy.cfg file:
frontend redis_frontend
    bind *:6379
    mode tcp
    option tcplog
    timeout client 1m
    default_backend redis_backend

backend redis_backend
    mode tcp
    option tcplog
    option log-health-checks
    option redispatch
    log global
    balance roundrobin
    timeout connect 10s
    timeout server 1m
    server redis_server [MEMORYSTORE IP]:6379 check
Restart haproxy:
/etc/init.d/haproxy restart
Now create a firewall rule that allows traffic on port 6379 to the VM (see the gcloud sketch after this list). Ensure:
It has the same target tag as the network tag we created on the VM.
It allows traffic on port 6379 for the TCP protocol.
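A sketch of those two steps with gcloud; the tag name redis-haproxy and the rule name are assumptions, and 0.0.0.0/0 should be narrowed to your own IP range where possible:
gcloud compute instances add-tags [VM NAME] --tags redis-haproxy --zone [ZONE]
gcloud compute firewall-rules create allow-redis-6379 --network default --allow tcp:6379 --target-tags redis-haproxy --source-ranges 0.0.0.0/0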
Now you should be able to connect remotely like so:
redis-cli -h [VM IP] -p 6379
Memorystore does not allow connections from local machines, and other routes, such as going through Compute Engine or GAE, are expensive, especially if your project is small or still in development. I suggest you create a Cloud Function to talk to Memorystore; it's a serverless service, which means a lower fee to execute. I wrote a small tool for this, and the result is similar to running against a local machine. You can check whether it helps you.
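As a rough sketch, deploying such a function needs a Serverless VPC Access connector so it can reach the Memorystore internal IP; the function name, region, and connector name below are assumptions, and the function code itself (a Redis client pointed at the Memorystore IP) is not shown:
gcloud functions deploy redis-probe --runtime python39 --trigger-http --region us-central1 --vpc-connector my-connector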
Like @Christiaan answered above, it almost worked for me, but I needed to check a few other things to make it work well.
Firstly, in my case my Redis instance runs in a specific network other than the default network, so I had to create the jump box inside the same network (let's call it my-network).
Secondly, I needed to apply a firewall rule to open port 22 in that network.
So, putting all the needed commands together, it looks like this:
gcloud compute firewall-rules create default-allow-ssh --project=my-project --network my-network --allow tcp:22 --source-ranges 0.0.0.0/0
gcloud compute instances create jump-box --machine-type=f1-micro --project my-project --zone europe-west1-b --network my-network
gcloud compute ssh jump-box --project my-project --zone europe-west1-b -- -N -L 6379:10.177.174.179:6379
Then I have access to Redis locally on port 6379.

Amazon EC2 refused to connect, although port 80 is open, and SSH works fine

My EC2 instance has been working well with no problems for many years, but after Amazon's recent maintenance, the webpage cannot be reached. This is the error I see in Chrome:
This site can't be reached
xxx.xxx.xxx.xxx refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
The SSH port (22) works fine; I can connect to it with Cyberduck as usual. However, other ports (80, 8080) do not work, although the security group has an inbound rule accepting any IPs for HTTP.
Edit: per request, this is what I see on my EC2 instance (connected with my pem key in Terminal):
$ netstat -an | grep 80
unix 2 [ ACC ] SEQPACKET LISTENING 8017 #/org/kernel/udev/udevd
unix 3 [ ] DGRAM 8026
unix 3 [ ] DGRAM 8025
I see "80"s are bold and red.
It sounds like the EC2 instance rebooted during maintenance and your web server is not set up to restart automatically; note that your netstat output shows no TCP listener on port 80 at all (the matches are unix socket inode numbers).
Use the chkconfig command to configure the Apache web server to start at each system boot: sudo chkconfig httpd on.
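chkconfig only affects future boots, so also start the service now and confirm a TCP listener appears on port 80:
sudo service httpd start
sudo netstat -tlnp | grep :80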

Deploying Docker to AWS Elastic Beanstalk -- how to forward port to host? (port binding)

I have a project set up with CircleCI that I am using to auto-deploy to Elastic Beanstalk. My EBS environment is a single container, auto-scaling, web environment. I am trying to run a service that listens on raw socket port 8080.
My Dockerfile:
FROM golang:1.4.2
...
EXPOSE 8080
My Dockerrun.aws.json.template:
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "<bucket>",
    "Key": "<key>"
  },
  "Image": {
    "Name": "project/hello:<TAG>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
I have made sure to expose port 8080 on the "role" assigned to my project environment.
I used the exact deployment script from the CircleCI tutorial linked above (except with changed names).
Within the EC2 instance that is running my EBS application, I can see that the Docker container has run successfully, except that Docker did not forward the exposed port to the host. I have encountered this in the past when I ran docker run ... without the -P flag.
Here is an example session after SSH-ing into the machine:
[ec2-user@ip-xxx-xx-xx-xx ~]$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a036bb061aea aws_beanstalk/staging-app:latest "/bin/sh -c 'go run 3 days ago Up 3 days 8080/tcp boring_hoover
[ec2-user@ip-xxx-xx-xx-xx ~]$ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
What I expect to see is something like ->8080 in the PORTS column, showing the container port forwarded onto the host.
When I do docker inspect on my container, I also see that these two configurations are not what I want:
"PortBindings": {},
"PublishAllPorts": false,
How can I trigger a port binding in my application?
Thanks in advance.
It turns out I misunderstood how Docker's networking stack works. When a port is exposed but not published, it is still available on the local network through the Docker container's private IP address. You can obtain this IP address by checking docker inspect <container>.
Rather than doing curl localhost:8080, I could do curl <containerIP>:8080.
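For example, a sketch using the container ID from the docker ps output above; the format string pulls the IP assigned on the default bridge network:
CONTAINER_IP=$(sudo docker inspect -f '{{.NetworkSettings.IPAddress}}' a036bb061aea)
curl "$CONTAINER_IP:8080"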
In my EBS deploy, nginx was automatically set up to forward (HTTP) traffic from port 80 to this internal private port as well.
I had the same problem in a Rails container (port 3000, using Puma): by default, rails server binds only to localhost, so I had to use the -b option to bind to 0.0.0.0, and that solved the problem.
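A sketch of that invocation (port 3000 matches my setup; in a Dockerfile this would be the CMD):
rails server -b 0.0.0.0 -p 3000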
In React I did not have the same problem, because the npm serve package binds to all interfaces by default.