Multiple SSH tunnel hops with DBeaver - amazon-web-services

Can DBeaver create two SSH tunnels and then connect to a database?
I have successfully created one SSH tunnel but not two.
I am trying to connect to an AWS RDS database via a Bastion host. The Bastion host only allows SSH access from my corporate IP range.
This means that when I am in the office I can connect to the RDS from DBeaver just fine:
My computer is in the allowed IP range
DBeaver creates an SSH tunnel to a Bastion host in my VPC inside the AWS cloud
DBeaver connects to the RDS database
The issue arises when I work from home.
I would have to add a "zero" step to get an allowed IP address for the Bastion host connection:
0) Connect to the machine inside the office
I have not yet managed to achieve this. Has anyone got an idea of how to do this?

Kudos to @erik258 for pointing me in the right direction.
I created an SSH tunnel between an office machine and the Bastion host. On the office machine, accessing localhost:<local_port> now effectively communicates with <RDS_endpoint> on port <remote_port>.
Steps:
Create an SSH tunnel from the office machine to the Bastion host:
$ ssh -L <local_port>:<RDS_endpoint>:<remote_port> -i <path_to_ssh_key> ec2-user@<Bastion_host_public_IP>
<local_port> - any free port on the office machine
<remote_port> - the port the RDS endpoint listens on (5432 for PostgreSQL)
<RDS_endpoint> - the endpoint shown on the AWS RDS console page
Create the DBeaver connection. In the "SSH" section, specify your office machine. In the "Main" section, set "Host" to localhost and "Port" to <local_port>.
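If you prefer to chain both hops from the home machine in a single command, OpenSSH's ProxyJump flag (-J, OpenSSH 7.3+) is one option. This is only a sketch: the office hostname and user are placeholders, and it assumes you can already SSH to the office machine with your default key or agent.

# -J hops through the office machine first, so the Bastion still sees an allowed corporate IP
ssh -J <office_user>@<office_machine_host> \
    -L <local_port>:<RDS_endpoint>:5432 \
    -i <path_to_ssh_key> ec2-user@<Bastion_host_public_IP>

With this running, DBeaver only needs Host=localhost and Port=<local_port> in the "Main" section and no SSH tab at all.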

Related

Connection error of AWS Redshift to local computer

I tried to connect to Amazon Redshift from my local computer using psycopg2. However, I got an error message:
psycopg2.OperationalError: could not connect to server: Operation timed out. Is the server running on host xxx and accepting TCP/IP connections on port 5439?
I have followed two guides found via Google:
Changed the Publicly Accessible setting to enabled, and
Added 0.0.0.0/0 and ::/0 routes to the VPC route table pointing to the Internet Gateway.
It still doesn't work. Please let me know if you know what the problem is.
Things to check:
Check the Security Group associated with the Redshift cluster and confirm that it permits access on port 5439 from your IP address
Check that the Redshift cluster was launched in a Public Subnet (with the Route Table for that subnet pointing to 0.0.0.0/0 to the Internet Gateway)
Make sure you are connecting by using the DNS Name (If you ping the DNS Name, does it resolve to an IP address?)
Try going via a different network (e.g. home vs office vs tethered via your phone)?
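A quick way to work through the DNS and connectivity checks above from a terminal is sketched below; the cluster endpoint is a placeholder.

# Does the cluster's DNS name resolve to an IP address?
nslookup my-cluster.abc123xyz0.us-west-2.redshift.amazonaws.com

# Can a TCP connection be opened on the Redshift port?
nc -zv -w 5 my-cluster.abc123xyz0.us-west-2.redshift.amazonaws.com 5439

A timeout from nc usually points at the Security Group, or at a cluster that is not publicly accessible or not in a public subnet, while a failure to resolve points at the DNS name itself.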

Connection failure using EC2 Instance Connect (browser-based SSH connection)

Launching an AWS EC2 instance seems quite straightforward, but when it comes to connecting to the newly launched instance things get sticky. The connection process proposed by such a tech giant is surprisingly counter-intuitive.
As a short reminder, an "instance" is technically a virtual machine running on Amazon's Elastic Compute Cloud (EC2); for more info one could have a look at this link.
The EC2 instance referred to in this discussion is Ubuntu Server 20.04 LTS (HVM).
The instructions for working with EC2 Linux instances are given here.
AWS EC2 proposes three different ways of connecting to an instance:
EC2 Instance Connect (browser-based SSH connection),
Session Manager
SSH Client
Now, with regard to connecting to the above-mentioned instance, only certain connections establish correctly and the rest of the proposed methods fail. Here is the list of connection successes and failures:
Ubuntu instance, security group source "Custom=0.0.0.0/0", Connection establishes using both EC2 Instance Connect (browser-based SSH connection) and SSH client.
Ubuntu instance, security group source "My IP=$IP", Connection establishes only using SSH client (terminal on Ubuntu and PuTTY on windows) and not using EC2 instance connect.
Both above cases have been tried on Ubuntu 20.04 and Windows 10 as local machine and the problem remains similar on both machines. I went through most of the failure cases discussed in the troubleshooting documents proposed here and verified them on my instance. Yet the problem persists. I should also add that I never tried "session manager" connection method although opening its tab already would give some info about "not installed" agents and features.
Any idea regarding this problem? Somebody out there facing the same issue?
From the docs:
(Amazon EC2 console browser-based client) We recommend that your instance allows inbound SSH traffic from the recommended IP block published for the service.
The reason for this: EC2 Instance Connect works by making an HTTPS connection between your web browser and the EC2 Instance Connect backend service on AWS. EC2 Instance Connect then establishes a "mostly normal" SSH connection to the target instance. In other words, the SSH request comes from the EC2 Instance Connect backend, not from your browser, which is why the security group needs to allow the published IP range for that region.
The browser-based EC2 Instance Connect client uses specific IP ranges for browser-based SSH connections to your instance, and these ranges differ between AWS Regions. To find the AWS IP address range for EC2 Instance Connect in a specific Region, use the following command, replacing the region placeholder with your Region (on Linux, curl and jq are required as prerequisites):
curl -s https://ip-ranges.amazonaws.com/ip-ranges.json | jq -r '.prefixes[] | select(.region=="<your-region>") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix'
Whatever value is returned, add it to the inbound SSH rule of your security group and it will work.
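As a concrete sketch, us-east-1 is used as an example region below and the security group ID is a placeholder; the returned prefix can then be added to the inbound SSH rule with the AWS CLI.

# Look up the EC2 Instance Connect prefix for the example region
EIC_CIDR=$(curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
  | jq -r '.prefixes[] | select(.region=="us-east-1") | select(.service=="EC2_INSTANCE_CONNECT") | .ip_prefix')

# Allow inbound SSH (port 22) from that range on the instance's security group (placeholder ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 22 \
  --cidr "$EIC_CIDR"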
Ubuntu instance, security group source "Custom=0.0.0.0/0", Connection establishes using both EC2 Instance Connect (browser-based SSH connection) and SSH client.
This works because 0.0.0.0/0 allows connections from all IP ranges, which includes the EC2 Instance Connect range for your region too.
For more details, try reading this troubleshooting guide.

Cannot ssh into Spark worker

There are 8 failed tasks in a particular executor. I want to connect to it via ssh to view the yarn logs.
The executor address is: ip-123-45-6-78.us-west-2.compute.internal:34265
I've tried both:
ssh ip-123-45-6-78.us-west-2.compute.internal:34265
and
ssh ip-123-45-6-78.us-west-2.compute.internal
But both produce the following error:
Could not resolve hostname ip-123-45-6-78.us-west-2.compute.internal:
Name or service not known
I've also added to the .ssh/config file the same key-pair I use to connect to the master:
Host master
HostName ec2-09-876-543-21.us-west-2.compute.amazonaws.com
User hadoop
IdentityFile ~/keypair.pem
Host worker
HostName ip-123-45-6-78.us-west-2.compute.internal
User hadoop
IdentityFile ~/keypair.pem
Also, neither ssh worker nor ssh worker:34265 works.
Just to be clear: ssh master does work!
The Spark application is running on an EMR cluster.
Hostnames of the form *.compute.internal resolve to internal (private) IP addresses, so you cannot SSH to them from your local system.
You are able to SSH to the master because you are using the public IP address of the master instance. Try using the public IP address for the worker too and it should work.
Alternatively, you can create an SSH tunnel through the master server; try something like:
Host worker
HostName ip-123-45-6-78.us-west-2.compute.internal
User hadoop
IdentityFile ~/keypair.pem
ProxyCommand ssh master -W %h:%p
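With a reasonably recent OpenSSH (7.3+), the same hop can also be done as a one-off command instead of editing the config; this sketch reuses the master alias defined above, so its user and key settings apply to the jump hop.

# -J first connects through the "master" alias from ~/.ssh/config,
# then opens the connection to the worker's private hostname through it
ssh -J master -i ~/keypair.pem hadoop@ip-123-45-6-78.us-west-2.compute.internal

Note that 34265 in the executor address is the executor's own port, not the SSH port; SSH listens on port 22, so connect to the plain hostname.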
The hostname you're trying to connect to will not resolve because you're outside of the AWS VPC. Private records (those in the compute.internal domain) only resolve if the network's DNS goes through the Route 53 Private Resolver.
If you're not too worried about resolving the DNS hostnames, you can instead connect via the private IP directly (assuming you have access via either a VPN connection or Direct Connect). Alternatively, connect via an instance that has public ingress, i.e. Client -> Jump Server -> Private Host.
If you do want to resolve the private domain name, the following are the best options:
Inbound Resolver
Simple AD
Set up an EC2-based DNS server in your VPC.

How to connect AWS EC2 private IP with filezilla

I am currently working on an AWS EC2 Linux AMI. I have a private IP. Is it possible to access that private IP with FileZilla to transfer files? I am unable to do so.
To access an EC2 machine with a private IP, you need to set up your own VPN server. If you already have a VPN set up in your AWS cloud, then you just need to install a VPN client and log in with your credentials, and you will be able to access the EC2 machine and transfer files with FileZilla using the private IP. I am assuming that you haven't set up a VPN server; you can use the OpenVPN AMI from the AWS Marketplace to set one up. Below is a good link for getting started.
https://docs.openvpn.net/how-to-tutorialsguides/virtual-platforms/amazon-ec2-appliance-ami-quick-start-guide/
After completing this, you have to install the OpenVPN client on your machine; after logging in with your credentials you will be able to access your EC2 instance with its private IP.
Below is the link for installing OpenVPN on an Ubuntu machine. For other operating systems you can explore the site.
https://docs.openvpn.net/getting-started/how-to-install-openvpn-as-software/
OpenVPN is one alternative; you can use others as well, as per your need.
You can do this in two ways:
Create a bastion host which will connect to the private instance
Use port forwarding, i.e. tunnelling.
If you are using a bastion host for connecting to the private EC2 instance, then these steps will be useful.
Using FileZilla to transfer files to a private EC2 instance through a bastion host:
Note: use the same PEM file for the bastion host and the private EC2 instance.
1. Open a terminal or cmd (on Linux a terminal, on Windows e.g. Git Bash).
2. Connect to the private AWS EC2 instance through the bastion with one terminal command:
ssh -N -L 1234:<private_instance_ip or Private_DNS>:22 -i <Pem_File> <user>@<Bastion_host_public_ip>
e.g.
ssh -N -L 1234:ip-171-12-21-208.us-east-1.compute.internal:22 -i app_prod.pem ubuntu@ec2-31-92-123-22.us-east-1.compute.amazonaws.com
Note: the first time you enter this command it will ask "Are you sure you want to continue connecting?" - answer yes.
3. Keep this terminal or cmd open. If you close this session, the connection is broken.
4. Open the FileZilla application and in the "Edit" menu click on "Settings".
5. On the "Settings" page, click on "SFTP", add the PEM file of the EC2 instance, and click "OK".
6. Add the following entries:
Host: 127.0.0.1 or sftp://127.0.0.1
Username: <your_user>
Password: keep empty
Port: 1234
7. Click on Quickconnect.
Once the connection is established, you can easily transfer files from your local machine to the private instance.
See: https://www.davidbegin.com/using-scp-to-transfer-files-to-a-private-ec2-instance-through-a-bastion-host/
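The same local forward can also be used from the command line instead of FileZilla. This is a sketch that assumes the tunnel from step 2 is still running and reuses the placeholder names from above.

# SFTP through the tunnel: 127.0.0.1:1234 is forwarded to port 22 on the private instance
sftp -P 1234 -i <Pem_File> <your_user>@127.0.0.1

# Or copy a single file with scp (scp also uses a capital -P for the port)
scp -P 1234 -i <Pem_File> ./local_file.txt <your_user>@127.0.0.1:/home/<your_user>/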

How to connect to memcached server from outside?

I installed memcached on AWS EC2 Ubuntu, and I can connect to it by telnet on the server:
telnet localhost 11211
But how can I connect to it from another machine? I know the internal IP is 172.31.17.208, but when I try to connect to it from another EC2 instance with:
telnet 172.31.17.208 11211
the response is
Could not open connection to the host, on port 11211: connect failed.
You will need a Public IP / Elastic IP if you want to access your Memcached server from outside of AWS.
Your internal IP will work within the VPC but not outside of it. I am guessing that the other instance you are trying to connect from is not in the same VPC. Try pinging your Memcached server from the other instance and check whether the internal IP is reachable.
Edit:
Apart from this, you might need to check your security group and make sure port 11211 is open for incoming connections.
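If you would rather not open port 11211 to the outside world at all, an SSH tunnel to the instance's public address is a common workaround; this is a sketch with placeholder key path and public IP.

# Forward local port 11211 to memcached on the instance, over SSH via the public IP
ssh -N -L 11211:127.0.0.1:11211 -i <path_to_key> ubuntu@<instance_public_ip>

# In another terminal, talk to memcached as if it were local
telnet localhost 11211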