I have an RDS instance with a MySQL database that can only be accessed by an EC2 instance running in AWS. Now I want to access the RDS instance from my local machine using SSH tunneling. I searched a lot on the net but none of the solutions worked. Can anyone please tell me how to do it step by step with a working solution?
Any help will be highly appreciated!
I tried to run:
ssh -i myNewKey.pem -N -L 3306:myredinstance:3306 ec2-user@myec2-instance
and then mysql -u dbauser -p -h 127.0.0.1 in the mysql-js utility, which gave me an error.
You can do it by setting up an SSH tunnel:
ssh -i /path/to/key -N -L 3306:an_rds_endpoint:3306 user@yourserver.com
Then connect locally:
mysql -u myuser -p -h 127.0.0.1
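If a local MySQL server is already using port 3306, the same approach works with a different local port. A minimal sketch, with placeholder key path and endpoint (3307 is just an arbitrary free local port):
ssh -i /path/to/key -f -N -L 3307:an_rds_endpoint:3306 user@yourserver.com
mysql -u myuser -p -h 127.0.0.1 -P 3307 --protocol=TCP
The -f flag backgrounds the tunnel after authentication; drop it if you prefer to keep the tunnel in its own terminal window.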
I want to be able to use kubectl commands against my master EC2 instance from my local machine without SSH. I tried copying .kube to my local machine, but the problem is that my kubeconfig uses the private network, so when I try to run kubectl locally I cannot connect.
Here is what I tried:
user@somehost:~/aws$ scp -r -i some-key.pem ubuntu@some.ip.0.0:.kube/ .
user@somehost:~/aws$ cp -r .kube $HOME/
user@somehost:~/aws$ kubectl version
and I got:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp some.other.ip.0:6443: i/o timeout
Is there a way to change the kubeconfig so that when I run kubectl commands locally, they are executed against the master on the EC2 instance?
You have to change the clusters.cluster.server key in your kubectl config to an externally accessible IP.
For this, the VM running your master node must have an external IP assigned.
Depending on how you provisioned your cluster, you may need to add an additional name to the Kubernetes API server certificate.
With kubeadm you can just reset the cluster with
kubeadm reset
on all nodes (including master), and then
kubeadm init --apiserver-cert-extra-sans=<master external IP>
Alternatively, you can issue your commands with the --insecure-skip-tls-verify flag, e.g.
kubectl --insecure-skip-tls-verify get pods
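If you would rather not touch the cluster at all, you can also just repoint the copied kubeconfig at the external address. A minimal sketch (the cluster name kubernetes and the IP are placeholders; check kubectl config view for your actual cluster name, and make sure the instance's security group allows inbound 6443 from your machine):
kubectl config set-cluster kubernetes --server=https://<master-external-ip>:6443
kubectl get nodes
Add --insecure-skip-tls-verify to the second command if the API server certificate does not include the external IP.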
I'm trying to connect to my local node here:
https://polkadot.js.org/apps/
I go to the top left corner and choose "local node".
However, I do not understand where to enter the IP and port for my node.
How do I do that?
Enable SSH tunneling to the server.
The first step is to enable an SSH tunnel from the local host to the remote server where your parastate node is hosted. The command below enables an SSH tunnel to the remote host xx.xx.xx.xx on port 9944, which is needed for the polkadot.js.org endpoint:
ssh -N -L 9944:127.0.0.1:9944 -i C:\Users\sshkey\privatekey.ppk root@xx.xx.xx.xx
In case you are not using an SSH key and would like to connect the traditional way by supplying a password, use the option below:
ssh -N -L 9944:127.0.0.1:9944 root@xx.xx.xx.xx
Enter the password when prompted.
You will just have an empty, hanging window. Don't panic, you are all good.
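Before switching to the browser, a quick sanity check (assuming the tunnel above is running) is to confirm the forwarded port is listening locally:
nc -zv 127.0.0.1 9944
With the tunnel up, the local node option in polkadot.js.org/apps, which points at ws://127.0.0.1:9944 by default, should connect without you entering any IP or port; alternatively you can add ws://127.0.0.1:9944 as a custom endpoint.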
How can I connect to a bastion server in an AWS VPC using Ansible 2.x to perform a Docker swarm setup? I've seen this question and the official FAQ.
I already tried providing the following via --extra-vars:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@my.bastion.server.com"' or even using ansible.cfg with the parameter above, or even something like:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -J ec2-user@my.bastion.dns.com
I tried a lot of combinations, but I always get this error message when running a ping command in a playbook:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 10.1.xx.xx port 22: Operation timed out\r\n",
Probably worth mentioning that:
I'm able to connect to the private hosts in my VPC normally using the ssh -J option, for example: ssh -J user@my.bastion.server.com user@vpc.host.private.ip.
I'm using Ansible's ec2.py dynamic inventory, with ec2.ini configured to map the private IPs for a given tag entry.
It was an SSH misconfiguration problem.
I was able to fix it with the following configuration parameters.
1) ansible.cfg file
[ssh_connection]
ssh_args = -o ProxyCommand="ssh -W %h:%p -q $BASTION_USER@$BASTION_HOST" -o ControlPersist=600s
control_path=%(directory)s/%%h-%%r
pipelining = True
2) ec2.ini file
[ec2]
regions = us-xxxx-x
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address
3) Playbook Execution Command
export BASTION_USER=xxx-xxxx;
export BASTION_HOST=ec2-xx-xx-xx-xx.xxxxx.compute.amazonaws.com;
ansible-playbook -u ec2-xxxx \
-i ./inventory/ec2.py \
./playbook/ping.yml \
--extra-vars \
"var_hosts=tag_Name_test_private ansible_ssh_private_key_file=~/.ssh/my-test-key.pem" -vvv
And voila!
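For a quick end-to-end test without the playbook, the same proxying can also be passed on the command line with an ad-hoc ping. This is just a sketch reusing the exported bastion variables and key from above; adjust the remote user and tag group to your environment:
ansible -i ./inventory/ec2.py tag_Name_test_private \
-u ec2-xxxx \
--private-key ~/.ssh/my-test-key.pem \
--ssh-common-args '-o ProxyCommand="ssh -W %h:%p -q $BASTION_USER@$BASTION_HOST"' \
-m ping
The single quotes keep the variables unexpanded locally; ssh runs the ProxyCommand through a shell, so the exported values are picked up there.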
I'm trying to use Fabric to connect to several nodes (Ubuntu VMs), but I cannot reach all nodes from the VM where Fabric is installed. Instead, I need to go first to one specific node, called the entry point, and from this entry point to another VM, from which all VMs are reachable. Please see the figure below. Any suggestions on how to use Fabric to achieve this?
Network Architecture
With more than one "jump", the easiest way to achieve this is by letting Fabric read the ProxyCommand directives (or equivalent) in your ~/.ssh/config.
Have a look at the documentation.
In your configuration file you should have something like the following:
Host entryPoint
HostName your-entrypoint-hostname-or-ipaddress
Host VM0
ProxyCommand ssh -q -W %h:%p entryPoint
Host VM1 VM2 VMN
ProxyCommand ssh -q -W %h:%p VM0
For a single jump you may consider using env.gateway instead.
A slight variation, using nc:
Host entryPoint
HostName your-entrypoint-hostname-or-ipaddress
Host VM0
ProxyCommand ssh -q entryPoint nc -q0 %h %p
Host VM1 VM2 VMN
ProxyCommand ssh -q VM0 nc -q0 %h %p
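Either way, it is worth confirming the chain works outside Fabric first. For example (the first command uses the aliases defined above; the second skips the config file entirely, with placeholder user and host names, using OpenSSH's comma-separated ProxyJump list):
ssh VM1 hostname
ssh -J user@entrypoint-host,user@vm0-host user@vm1-host hostname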
I'd like my EC2 instance to have IAM-based permissions, but I don't want the Docker containers on that instance to have the same permissions. I believe it should be sufficient to block access to the magic IP 169.254.169.254. Is it sufficient to run:
iptables -I DOCKER -s 169.254.169.254 -j DROP
Do I also need to configure my docker daemon with --icc=false or --iptables=false?
I finally got this working; you need to add this rule on the host machine:
1) Drop Docker bridge packets outbound to 169.254.169.254 on port 80 or 443.
sudo iptables -I FORWARD -i docker0 -d 169.254.169.254 \
-p tcp -m multiport --dports 80,443 -j DROP
Now, if I try to connect inside the container:
$ sudo docker run -it ubuntu bash
root@8dc525dc5a04:/# curl -I https://www.google.com
HTTP/1.1 200 OK
root@8dc525dc5a04:/# curl -I http://169.254.169.254/
# <-- hangs indefinitely, which is what we want
Connections to the special IP still work from the host machine, but not from inside containers.
Note: my use case is for Google Compute Engine and prevents Docker containers from accessing the metadata server on 169.254.169.254, while still allowing DNS and other queries against that same IP. Your mileage may vary on AWS.
I would recommend the following variation on the accepted answer:
sudo iptables \
--insert DOCKER-USER \
--destination 169.254.169.254 \
--jump REJECT
The reason for this is that the above command adds the rule to the DOCKER-USER chain, which Docker is guaranteed not to modify.
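To double-check the rule is present and effective, something like the following should do; the curlimages/curl image is just one convenient way to get curl inside a throwaway container, and because REJECT (unlike DROP) answers immediately, the request fails fast instead of hanging:
sudo iptables -L DOCKER-USER -n -v --line-numbers
sudo docker run --rm curlimages/curl -m 5 -sI http://169.254.169.254/ || echo "blocked as expected"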
Sources:
https://ops.tips/blog/blocking-docker-containers-from-ec2-metadata/
https://docs.docker.com/network/iptables/