vpn through google cloud gce - google-cloud-platform

I am trying to set up a small GCE instance to use as a VPN gateway when I travel. My current attempt is to set up OpenVPN on Ubuntu 16.04 using the script at https://www.cyberciti.biz/faq/howto-setup-openvpn-server-on-ubuntu-linux-14-04-or-16-04-lts/. I was able to run the script and get an .ovpn file at the end. On my laptop (Kubuntu 16.04) I was able to connect to the GCE machine using Network Manager by importing the .ovpn file.
On my laptop I can see that the connection is successful, but no data is going through it (i.e. I can't reach any websites in my browser).
What I am not sure about is whether I need to make changes to the Google network firewall, based on these comments in the link:
#OpenVPN Forward by vg
-A ufw-before-forward -m state --state RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -s 10.8.0.0/24 -j ACCEPT
-A ufw-before-forward -i tun+ -j ACCEPT
-A ufw-before-forward -i tap+ -j ACCEPT
#OpenVPN END by vg
So, if something like that is needed, how do I set up those rules in the Google network firewall?
I do have port 1194 open to my GCE instance, which I assume is why the VPN connects at all.
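If I understand the linked comments correctly, those are local ufw rules rather than Google firewall rules; they would belong in /etc/ufw/before.rules on the VM itself. From what I've read about similar OpenVPN setups (my assumption, not something the linked script states), traffic also won't flow without a NAT section in that same file, along these lines (ens4 is the default interface on my GCE image; older images use eth0):
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o ens4 -j MASQUERADE
COMMIT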

I've made a quick test in my own project, and it works fine.
In my case I don't have any firewall rules running on the OpenVPN server itself.
Keep in mind that you need to configure your internal IP as the listening IP, and enter the public IP when the script asks for it.
In your example they use the same IP.
I have also set a firewall rule in the Google Cloud project allowing inbound traffic on port 1194.
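If you need to create that rule from the command line, something like this should work (the rule name is a placeholder, and I'm assuming the default network and the UDP port the script uses):
gcloud compute firewall-rules create allow-openvpn --network=default --allow=udp:1194 --source-ranges=0.0.0.0/0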

Related

AWS - Unable to ping GNS3 router from another server

We want a test cloud virtual network that lets us run SNMP GETs against multiple virtual devices. To achieve this I am using GNS3. We just deployed a GNS3 server on EC2 (Ubuntu 18), but we can't ping or SNMP-get any router from outside the GNS3 server. We can ping these devices while we are on the GNS3 server itself, but it does not work from another server or from my computer.
The GNS3 server is already created and deployed.
The VPG, site-to-site VPN, and VPC are already created, and the servers were added to this VPC.
After some weeks of research our team found the solution. If anyone is having this same problem, consider these important points in your AWS configuration:
Server A (GNS3) must be in a different subnet than Server B (the test server you want to ping from).
A route table must be created in the AWS config pointing to the GNS3 IPs.
Configure NAT on Server A (in my case an Ubuntu 18) using the following instructions:
Set up IP forwarding and masquerading:
iptables --table nat --append POSTROUTING --out-interface ens5 -j MASQUERADE
iptables --append FORWARD --in-interface virbr0 -j ACCEPT
Enable packet forwarding in the kernel:
echo 1 > /proc/sys/net/ipv4/ip_forward
Apply the configuration. Note that Ubuntu has no iptables service, so instead of service iptables restart you can persist the rules with the iptables-persistent package:
netfilter-persistent save
This will allow your virtual GNS3 devices on Server A to be reached from Server B (a more detailed explanation here). Additionally, you might want to test an SNMP walk from Server B to your virtual device on Server A (a router in my case).
If this does not work, try debugging with flow logs in AWS and checking whether Server A is actually receiving the requests.
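For the SNMP walk mentioned above, a minimal check from Server B could look like this (the community string and device IP are placeholders for your own GNS3 device):
snmpwalk -v2c -c public <gns3-device-ip> system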

SSH Port forwarding / Tunneling with multiple hops

Background
Three subnets exist in an AZ in AWS. Two of them are private and one is public.
The public subnet has a jumpbox which can be connected to from my local machine via ssh using a pem file (sample: ssh -i my-key-file.pem ec2-user@host1).
The first private subnet has an EC2 instance that acts as an application server. It can only be reached from the jumpbox via ssh, with the same pem file (sample: ssh -i my-key-file.pem ec2-user@host2, executed on host1).
The second private subnet hosts an Oracle instance using the AWS RDS service, running on port 1521. The DB can only be accessed from the app server (host2).
How I am working currently
host2 has the sqlplus client installed.
First, I connect to host1, then to host2, and then run sqlplus to execute queries at the command line (no GUI).
I am planning to use GUI tools like SQL Developer to connect right from my local machine, and I thought this could be achieved using port forwarding / SSH tunneling.
I tried using different options, but with no success. The following links are useful:
https://superuser.com/questions/96489/an-ssh-tunnel-via-multiple-hops
https://rufflewind.com/2014-03-02/ssh-port-forwarding
My Approach to SSH Tunneling
ssh -N -L 9999:127.0.0.1:1234 ec2-user@host1 -i my-key-file.pem -v -v -v
This is executed on my local machine.
It does not do much by itself, since I can already connect to host1 using ssh; host1 is my first hop. After this, ssh listens on port 9999, which is local to my machine, and forwards any traffic to port 1234 on host1. My assumption is that if I point sqlplus on my local machine at localhost:9999, the traffic will arrive at host1:1234.
I used 127.0.0.1 because the target of SSH tunneling is interpreted from the SSH server's point of view, which is host1; basically, the target and the SSH server are the same host.
ssh -N -L 1234:db-host:1521 ec2-user@host2 -i my-key-file.pem -v -v -v
This is executed on host1
After this, ssh forwards any incoming traffic on port 1234 to the target host (the DB host) on port 1521, using host2 as the tunnel.
Again, my assumption is that ssh is listening on port 1234 on host1, and any traffic arriving there will be delivered to the DB host through host2.
I executed both commands and did not see any error. I verified which ports are listening using netstat -tulpn | grep LISTEN.
After these two, my plan was to connect to the database using localhost as the hostname and 9999 as the port number.
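For reference, with OpenSSH 7.3+ the same two-hop forward can be written as a single command from the local machine using ProxyJump; this is a sketch with the question's host names, assuming the same key is usable for both hops (e.g. via ssh-agent):
ssh -i my-key-file.pem -J ec2-user@host1 -N -L 9999:db-host:1521 ec2-user@host2
SQL Developer would then point at localhost:9999.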
What's going wrong!
But when I try to connect to the DB from my local machine, I get an error from my SQL client: "Got minus one from a read call". I could not understand the debug messages in the ssh logs.
I believe my understanding of how port forwarding works might not be right. Any inputs would be helpful.
Thanks for your time!

Can't connect to remote instance via ssh tunneling + proxying

Having trouble connecting to a remote server via ssh tunneling.
I'm not that experienced with ssh or port forwarding. I'm trying to forward traffic from an application on a remote lab server to a port on my laptop so I can monitor the traffic. I can log into the server without a problem using:
ssh -i ~/.ssh/mykey.pem username@server.com
However, when I try to create a tunnel (which I am routing through a proxy server via SwitchyOmega):
ssh -L 3128:localhost:8888 -N -i ~/.ssh/mykey.pem username@server.com
I still can't access the page.
My OS is El Capitan and I'm using Chrome, but the remote server is running ubuntu. I appreciate any advice or suggested reading!
EDIT: Initially I thought the server was on AWS with a fixed IP, but it turns out it's a physical lab server.
You need to make the forwarding accessible to others, so do not bind to localhost but to the external IP or *. You also need to specify the -g switch if you are connecting to the forwarded port remotely:
ssh -g -L *:3128:localhost:8888 -N -i ~/.ssh/mykey.pem username@server.com
In a new terminal window on your local machine, SSH into the remote machine using the following options to setup port forwarding.
ssh -N -L 3128:localhost:8888 user#remote_server
The -N option tells SSH that no commands will be run (useful for port forwarding), and -L specifies the port forwarding configuration that we set up.
To close the SSH tunnel, simply press Ctrl-C.
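Before involving the browser, you can check the tunnel with curl from your laptop; this assumes the service on the server's port 8888 is an HTTP proxy, as the SwitchyOmega setup suggests:
curl -x http://127.0.0.1:3128 http://example.com/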

Can you connect to Amazon ElastiCache Redis outside of Amazon?

I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node from outside Amazon EC2, such as from my local dev setup or VPS instances provided by other vendors.
Currently, when trying from my local setup:
redis-cli -h my-node-endpoint -p 6379
I only get a timeout after some time.
SSH port forwarding should do the trick. Try running this from your client:
ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>
Then from your client
redis-cli -h 127.0.0.1 -p 6379
It works for me.
Please note that the default port for Redis is 6379, not 6739. Also make sure you allow the security group of the EC2 node you use to connect to Redis in your cache security group.
Also, AWS now supports accessing your cluster from outside; more info here.
Update 2018
The previous answer was accurate when written; however, it is now possible, with some configuration, to access Redis from outside AWS by following the directions in Accessing ElastiCache Resources from Outside AWS.
Old Answer
No, you can't without resorting to 'tricks' such as a tunnel, which may be OK for testing but will kill any real benefit of using a super-fast cache, because of the added latency/overhead.
The old FAQ, under "How is using Amazon ElastiCache inside a VPC different from using it outside?", said:
An Amazon ElastiCache Cluster, inside or outside a VPC, is never allowed to be accessed from the Internet.
However, this language has been removed from the current FAQ.
These answers are out of date.
You can access ElastiCache from outside AWS by following these steps:
1. Create a NAT instance in the same VPC as your cache cluster, but in a public subnet.
2. Create security group rules for the cache cluster and the NAT instance.
3. Validate the rules.
4. Add an iptables rule to the NAT instance (a sketch follows below).
5. Confirm that the trusted client is able to connect to the cluster.
6. Save the iptables configuration.
For a more detailed description, see the AWS guide:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
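The iptables rule in step 4 is essentially a DNAT from the NAT instance to the cluster node; the guide has the authoritative form, but it is roughly this (interface and addresses are placeholders):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6379 -j DNAT --to-destination <node-endpoint-ip>:6379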
Not so old question; I ran into the same issue myself and solved it.
Sometimes, for development reasons, you need access from outside (to avoid multiple deployments just for a simple bug fix, maybe?).
Amazon has published a new guide that uses EC2 instances as proxies for the outside world:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
Good luck!
BTW, if anyone wants a Windows EC2 solution, try these at the DOS prompt (on said Windows EC2 machine):
To Add port-forwarding
C:\Users\Administrator>netsh interface portproxy add v4tov4 listenport=6379 listenaddress=10.xxx.64.xxx connectport=6379 connectaddress=xxx.xxxxxx.ng.0001.use1.cache.amazonaws.com
To list port-forwarded ports
C:\Users\Administrator>netsh interface portproxy show all
Listen on ipv4:                Connect to ipv4:
Address          Port          Address                                       Port
---------------  ----          --------------------------------------------  ----
10.xxx.128.xxx   6379          xxx.xxxxx.ng.0001.use1.cache.amazonaws.com    6379
To remove port-forwarding
C:\Users\Administrator>netsh interface portproxy delete v4tov4 listenport=6379 listenaddress=10.xxx.128.xxx
We are using HAProxy as a reverse proxy server.
Your system outside AWS ---> Internet --> HAProxy with public IP --> Amazon Redis (Elasticache)
Note that, at the time, there was another good reason to do this: we used a node.js client that didn't support Amazon DNS failover, and the client driver wouldn't perform a DNS lookup again. If Redis failed over, the driver would keep connecting to the old master, which had become a slave after the failover. Using HAProxy solved that problem.
The latest ioredis driver now supports Amazon DNS failover.
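A minimal TCP-mode HAProxy configuration for this layout could look like the sketch below; the endpoint and timeouts are placeholders, not our production values:
defaults
    mode tcp
    timeout connect 5s
    timeout client 1m
    timeout server 1m
frontend redis_in
    bind *:6379
    default_backend redis_out
backend redis_out
    server elasticache <node-endpoint>:6379 check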
This is a solid node script that will do all the dirty work for you. Tested and verified; it works.
https://www.npmjs.com/package/uzys-elasticache-tunnel
How to use
Usage: uzys-elasticache-tunnel [options] [command]
Commands:
  start [filename]  start tunneling with configuration file (default: config.json)
  stop              stop tunneling
  status            show tunneling status
Options:
  -h, --help     output usage information
  -V, --version  output the version number
Usage Example
start - uzys-elasticache-tunnel start ./config.json
stop - uzys-elasticache-tunnel stop
status - uzys-elasticache-tunnel status
It is not possible to directly access the classic cluster from a VPC instance. The workaround is to configure NAT on the classic instance.
The NAT needs to be a simple TCP proxy:
YourIP=1.2.3.4
YourPort=80
TargetIP=2.3.4.5
TargetPort=22
iptables -t nat -A PREROUTING --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
iptables -t nat -A POSTROUTING -p tcp --dst $TargetIP --dport $TargetPort -j SNAT \
--to-source $YourIP
iptables -t nat -A OUTPUT --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
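For these rules to actually forward traffic, the kernel must also have packet forwarding enabled (the snippet above does not include this, but it is standard for a NAT box):
sysctl -w net.ipv4.ip_forward=1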
I resolved it using this Amazon doc; it says you'll have to install stunnel on your other EC2 machine.
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
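The approach in that article boils down to a local stunnel wrapping the Redis connection in TLS; a minimal client-side stunnel.conf might look like the following sketch (the endpoint is a placeholder, and the article has the authoritative settings):
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = <primary-endpoint>:6379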

How to launch an amazon ec2 instance inside VPC using Chef?

This is a question primarily about Chef. When looking into controlling nodes inside an Amazon VPC with Chef, I ran into some difficulties, mainly that a node that does not have an external IP address is not easily reachable by Chef.
I went through the basic tutorial for scenario #2 http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html#Case2_Launch_NAT
However, this times out:
knife ec2 server create -N app-server-1 -f m1.small -i rails-quick-start.pem -r "role[base]" -G WebServerSG -S rails-quick-start -x ubuntu -s subnet-580d7e30 -y -I ami-073ae46e -Z us-east-1d
What am I doing wrong?
In order for knife to be able to talk to the server, you may need to set up a VPN. If your VPC is already connected to your local network via a VPN, then it should work; if not, you might want to run an OpenVPN server or something similar.
You can also set up servers in two other ways:
Create an EC2 instance and let it boot up, then run knife bootstrap against it (see the sketch below).
Create an EC2 instance with the proper user data and have cloud-init set it up (if you are running, say, Ubuntu, which includes cloud-init).
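For the first option, a knife bootstrap run against an already-booted instance looks roughly like this (the address is a placeholder; the key, node name, and run list reuse the question's values):
knife bootstrap <instance-public-ip> -x ubuntu -i rails-quick-start.pem --sudo -N app-server-1 -r 'role[base]'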
The solution was to set up a tunnel, tunneling SSH on some port of a publicly visible machine to each of the other machines in the cloud. So my load balancer serves HTTP traffic on port 80, is accessible via port 22, and uses ports 2222, 2223, 2224, ... to tunnel SSH to the non-public cloud instances. On the load balancer (or any public instance) run:
ncat --sh-exec "ncat PRIVATE.SUBNET.IP 22" -l 2222 &
for example:
ncat --sh-exec "ncat 10.0.1.1 22" -l 2222 &
There needs to be a way to associate an Elastic IP with the instance in order to get a public IP for easy access, and then do all the bootstrapping and SSH activity through the EIP.