In AWS, access control by ssh proxy + sshd

In AWS, our users (system admins) can access internal-zone DB servers by using SSH tunneling, without any local firewall restrictions.
As you know, to access an internal node a user must first go through the public-zone gateway server.
Because the gateway is effectively a passage, I wish to control the traffic of tunneled users on the gateway server.
For example, I want to get the IP addresses of all currently connected clients, identify the internal path (e.g. the DB server IP) each user accessed, and furthermore control the connections of unauthorized users.
To make this dream come true, I think the idea below is ideal.
1) Change the sshd port to something other than 22. Restart the sshd daemon.
2) Place an ssh proxy (nginx, haproxy, or similar) in front of sshd and let the proxy receive all ssh traffic from clients.
3) The ssh proxy routes the traffic to sshd.
4) Then I can see all user activity by analyzing the ssh proxy log. That's it.
Is this dream possible?

Clever, but with a critical flaw: you won't gain any new information.
Why? The first S in SSH: "secure."
The "ssh proxy" you envision would be unable to tell you anything about what's going on inside the SSH connections, which is where the tunnels are negotiated. The connections are encrypted, of course, and a significant point of SSH is that it can't be sniffed. The fact that the ssh proxy is on the same machine makes no difference. If it could be sniffed, it wouldn't be secure.
All your SSH proxy could tell you is that an inbound connection was made from a client computer, and syslog already tells you that.
In a real sense, it would not be an "ssh proxy" at all -- it would only be a naïve TCP connection proxy on the inbound connection.
So you wouldn't be able to learn any new information with this approach.
It sounds like what you need is for your ssh daemon, presumably openssh, to log the tunnel connections established by the connecting users.
This blog post (which you will, ironically, need to bypass an invalid SSL certificate in order to view) was mentioned at Server Fault and shows what appears to be a simple modification to the openssh source code to log the information you want: who set up a tunnel, and to where.
Or, enable some debug-level logging on sshd.
So, to me, it seems like the extra TCP proxy is superfluous -- you just need the process doing the actual tunnels (sshd) to log what it is doing or being requested to do.
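For illustration, a minimal sketch of that logging route on the gateway's sshd, assuming a reasonably recent OpenSSH (the exact messages and the level at which tunnel targets appear vary by version):
# /etc/ssh/sshd_config on the gateway
LogLevel VERBOSE
# or, for per-tunnel detail such as "server_request_direct_tcpip: originator ... target ...":
# LogLevel DEBUG1
# apply and watch the log (service name and log file depend on the distribution)
sudo systemctl reload sshd
sudo journalctl -u sshd -f        # or tail -f /var/log/auth.log / /var/log/secure
This records who connected and, at debug level, which internal host and port each forwarded channel was opened to, which is exactly the information a TCP proxy in front of sshd could never see.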

Related

Cloud SQL instance connectivity with Open VPN

I am trying to explore a way to connect to a Postgres 13 Cloud SQL instance that has only a private IP from my local Windows machine.
I am able to connect through a compute instance tied to the same subnet, as per default GCP behavior.
I want to secure my Cloud SQL instance so that it is accessible only through a VPN. I have OpenVPN CE installed and have whitelisted the OpenVPN IP in the GCP firewall rule. I am still getting the error message:
psql: error: could not connect to server: Connection timed out
Is the server running on host "{ip_address}" and accepting
TCP/IP connections on port 5432?
There are methods to connect via private IP with the Cloud SQL proxy enabled, but is there a way I can make it happen via other VPNs?
When you receive this error, most of the time it is because PostgreSQL is not configured to allow TCP/IP connections, or at least not connections from your particular workstation. Here is a list of common causes of this problem:
a) postgresql.conf is not set up to allow TCP/IP connections. You'll want to look at the listen_addresses configuration parameter.
b) postgresql.conf is not set up to allow connections on a non-standard port number. To determine this, look at the port configuration option.
c) Authentication rules in PostgreSQL's access configuration file (pg_hba.conf) are not set up to allow either your user or IP address to connect to that database. See the official documentation for more information on setting up your pg_hba.conf properly.
d) Ensure that there are no firewalls, such as iptables, keeping your local system from even establishing a connection to the remote host. For common PostgreSQL problems and possible solutions, check here.
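To narrow down which of these applies, a few diagnostics you might run, assuming a self-managed Linux host you can log in to (<db-host-ip> is a placeholder):
# is PostgreSQL listening on a non-loopback address and the expected port?
sudo ss -tlnp | grep 5432
# can the client reach the port at all? a timeout here points at a firewall or routing issue
nc -vz -w 5 <db-host-ip> 5432
# does psql get past TCP and fail with an authentication error instead of a timeout?
psql "host=<db-host-ip> port=5432 user=postgres dbname=postgres"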
You have to edit the postgresql.conf file and change the line with 'listen_addresses'. You can find this file in the /etc/postgresql/13/main directory. To connect to the PostgreSQL server from other computers, you have to change this config line as follows:
listen_addresses = '*'
Then you have to edit the pg_hba.conf file, too. In this file you set which computers may connect to this server and which authentication method they must use. Usually you will need a line similar to:
host all all <IP address> md5
For detailed steps, you can check here.
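A sketch of those edits plus the step that applies them, assuming a self-managed PostgreSQL 13 on Debian/Ubuntu (the 10.8.0.0/24 range is a hypothetical OpenVPN client subnet):
# /etc/postgresql/13/main/postgresql.conf
listen_addresses = '*'
# /etc/postgresql/13/main/pg_hba.conf
host    all    all    10.8.0.0/24    md5
# listen_addresses needs a restart; pg_hba.conf changes only need a reload
sudo systemctl restart postgresql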
Finally I had to set up the Cloud SQL proxy on an f1-micro VM instance that has only a private IP.
I whitelisted port 5432 in the firewall rule.
From TablePlus I use the private IP of the VM instance to connect to my Cloud SQL Postgres instance.
If somebody has other alternatives, please do let me know.
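For reference, a sketch of what that proxy VM setup might look like with the v1 Cloud SQL Auth proxy CLI (the instance connection name is a placeholder; binding to 0.0.0.0 lets other hosts reach the proxy through the VM's private IP):
# run on the f1-micro VM
./cloud_sql_proxy \
  -instances=my-project:us-central1:my-postgres=tcp:0.0.0.0:5432 \
  -ip_address_types=PRIVATE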

Connection refused error with AWS + Hashicorp Vault

I have configured a Hashicorp Vault server on an EC2 instance. When trying to use Postman to test the transit secrets engine API I keep getting a connection refused error in Postman. I went full ape mode and opened all ports in the security group inbound rule and it didn't work; I attached an Elastic IP to the instance and it didn't work either. I'm just trying a simple GET and I just keep getting the same connection refused error.
When I use curl in the SSH-connected session I have no issues, though. The specified host address is 127.0.0.1:8200; in Postman I replaced that localhost with the public address of the instance, which I obviously censored in the screencap. In the headers there's the token needed to access Vault; for simplicity I was just using the root token.
Postman screencap if it helps
@Emilio Marchant
I have faced a similar issue (not with Postman, but with telnet). Let's try to understand the problem here.
The issue is with the 127.0.0.1 IP. This is the loopback IP: when you (or your computer) call an IP address, you are usually trying to contact another computer on the internet. However, if you call the IP address 127.0.0.1, then you are communicating with localhost – in principle, with your own computer.
Reference link : https://www.ionos.com/digitalguide/server/know-how/localhost/
What you can try is below.
Start the vault dev server with the -dev-listen-address parameter.
E.g.:
vault server -dev -dev-listen-address="123.456.789.1:8200"
In the above command, replace '123.456.789.1:8200' with '<your EC2 instance private IP>:8200'.
Next, set the VAULT_ADDR and VAULT_TOKEN parameters as below:
export VAULT_ADDR='http://123.456.789.1:8200'
export VAULT_TOKEN='*****************'
Again, replace 'http://123.456.789.1:8200' with 'http://<your EC2 instance private IP>:8200'.
For VAULT_TOKEN: you should get a root token in the console when you start the vault server; use that token.
Now try to connect from Postman or using a curl command. It should work.
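For example, a quick check with curl, assuming the transit engine is mounted at its default path and a key named 'my-key' already exists (both are assumptions; sys/health needs no token at all):
curl http://<your ec2 instance private IP>:8200/v1/sys/health
curl --header "X-Vault-Token: $VAULT_TOKEN" http://<your ec2 instance private IP>:8200/v1/transit/keys/my-key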
Reference question and solution :
How to connect to remote hashicorp vault server
The notable thing here is that the response is "connection refused". This error means that the connection attempt reached the host but found no process listening on that port. It also means that there is no issue with a firewall: a firewall will cause the connection to either drop (reject) or time out (ignore), but won't give ECONNREFUSED.
The most likely issue is that the Vault server process is not bound to the correct network interface. There must be a configuration option in Hashicorp Vault to set the IP on which to bind. Most servers, by default, bind only to the loopback address, which is accessible only from 127.0.0.1. You need to bind it to all network interfaces by changing that to 0.0.0.0. I am not aware of the specific configuration option of Hashicorp Vault, but there has to be something to this effect.
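In Vault's case the binding is controlled by the listener block in the server's HCL configuration (for a non-dev server). A minimal sketch of the relevant stanza; the file path is an assumption and disabling TLS is only acceptable for a private test setup:
# part of the Vault server config file, e.g. /etc/vault.d/vault.hcl
listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}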
Possible security issue:
Note that some servers expect you to run them behind a reverse proxy so that you can set up SSL (HTTPS) and other authentication if needed. Applications like Vault servers should not be publicly accessible over HTTP without SSL.

How does a bastion know which RDS instance to connect to in AWS

I am trying to set up a bastion host in AWS in order to perform administrative operations on an RDS instance in a private subnet. I am following the instructions from the official documentation (https://docs.aws.amazon.com/quickstart/latest/linux-bastion/step1.html), but it is not clear how the bastion will know which RDS instance to connect to. How do I make sure that it can 'talk to' my intended RDS instance? (As far as I understand, the key pair is just something I can create anytime and use to connect to the bastion itself, but not to RDS, or am I wrong?)
The documentation you linked uses an AWS CloudFormation stack to deploy the Bastion. I'm not sure exactly what configuration it is using, so my answer will be generic, rather than applying to this specific situation.
The normal configuration is:
A database in a private subnet
A Bastion server (EC2 instance) in a public subnet
A connection is made to the Bastion, which then allows an onward connection to the database
There are a number of ways of connecting to the database through the Bastion. Here's one that I use:
ssh -i key.pem ec2-user@BASTION-IP -L 3306:DATABASE-DNS-NAME:3306
This tells the SSH connection to forward any traffic sent to my local port 3306 (the first number), through the SSH connection, but then send it to DATABASE-DNS-NAME:3306 (the database server). Any response from the database will come back the same way.
Then, when I wish to refer to the database from my computer, I reference:
localhost:3306
It appears that the database is on my own computer, but the traffic is actually sent across SSH to the Bastion, then onto the database.
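For example, with that tunnel open, a MySQL-compatible RDS instance could be reached like this (the user name is a placeholder; 127.0.0.1 is used so the client goes over TCP rather than a local socket):
mysql -h 127.0.0.1 -P 3306 -u admin -p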
There are newer and better ways of doing this forwarding that other people might (hopefully) add as a comment or another answer, but this is the way I make my connections through a Bastion.
Fun fact: A Bastion is the bit of a castle wall that sticks out, allowing defenders to shoot arrows at attackers attempting to climb the wall. In a similar way, the Bastion Server sticks out into the Internet, beyond the protected part of the network.

Request time out when pinging server on AWS

In order to check the health of a server I have, I want to write a function I can call to check whether my service is online.
I used the command prompt to ping the IP address of the server; however, all of the packets were lost due to request timeouts.
I'm guessing I don't need a dedicated function to handle being pinged, and I believe the failure is due to the server's security rules denying the request. Currently the server only allows inbound HTTP traffic, and I believe this to be the problem.
For an AWS instance, what protocol rule do I need to add in order to accept ping requests?
In the Security Group for the EC2 instance you should allow inbound ICMP.
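A sketch of doing that with the AWS CLI (the security group ID and source CIDR are placeholders; restrict the CIDR to the hosts that should be allowed to ping):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol icmp \
  --port -1 \
  --cidr 203.0.113.0/24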

Accessing n tier database with Navicat

We just made our web system more secure by converting a single web server/database server into a 2-tier system with the web server in front of the database server. The web server has two NICs, one for the outside world and one for an internal network. The database server has one NIC, on the inside network.
In the old days, I could use Navicat's SSH feature to connect to the single web server/database server. Now the database server is hidden.
Using the command line I can ssh to the web server and then ssh into the database server, but I miss my graphical tools. Is there any way to get Navicat to connect to the database server? Is there something I can set up on the web server that will proxy to the database?
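(For context, that manual two-hop access can also be expressed as a single local port forward through the web server, which is what a GUI client could ride on; the IP, port, and key below are placeholders, and the answer that follows explains why a VPN is preferable to exposing this path at all:)
ssh -i key.pem -L 3307:10.0.0.5:3306 admin@webserver
# then point Navicat (or any client) at 127.0.0.1:3307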
Short answer: You shouldn't connect to the database server through the web server. Yes, there are ways you could set this up, but I wouldn't recommend it if your goal is increased security.
There ought to be a way for you to VPN in to the internal network, and then ssh to both hosts from there. The security benefit is largely in reducing the attack surface of your externally accessible machines, so you'd be better off turning off ssh entirely on the external interface and VPN-ing in to the internal network (which I hope is firewalled to allow only database traffic between the two servers, not that the web server simply has a NIC that's on your internal network!). Once you're on the internal network you can have Navicat connect directly to the database server, without the need for ssh tunneling. (Obviously you'd need to set the firewall policies on your VPN tunnel correctly to allow this.)
If this setup is not possible, for instance if you're using a low-end shared web host, see these instructions to set up an HTTP tunneling connection through the web host. I really would recommend using the VPN solution if you can, but if you can't, HTTP tunneling is the most secure way to connect directly through the web server to the db server.