Access bosh-lite from remote machine - cloud-foundry

I installed bosh-lite on my physical machine1, and now I want to use the BOSH CLI on another machine2 to access the BOSH director. machine1 and machine2 can reach each other.
I tried using iptables SNAT, but it did not work. How can I do this?

Make sure you can first connect from the same local machine using the CLI. If that works, make sure you can connect to machine1's IP from machine2 (the host, not the bosh-lite VM). If that works, you need to add a route on machine2 telling it to send traffic for the bosh-lite VM IPs to machine1's IP:
sudo route add -net 192.168.100.0/24 192.168.50.4
where the first argument is the network range defined in the bosh-lite manifest, and the second is the gateway, likely machine1's IP.
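Note that this is the BSD/macOS route syntax. If machine2 runs Linux, a sketch of the equivalent (assuming machine1's IP really is 192.168.50.4) is:
sudo ip route add 192.168.100.0/24 via 192.168.50.4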


gcloud VM - This site can’t be reached

I have created an n1-standard-1 (1 vCPU, 3.75 GB memory) VM with a static IP address and installed LAMP on it. When I try to hit the static IP address in a browser, it says "This site can't be reached", even though I have checked the firewall rules and port 80 is open.
Below is the output of the gcloud compute firewall-rules list command:
And the output of telnet is:
Is there anything else I need to do to open ports 80 and 443?
Please help, thank you!!
This could be the VM's configuration. You'll want to check that the machine is actually listening on that port. You may have installed LAMP, but are the services actually started? The best way to check is to SSH into the system and curl localhost. If the curl fails, you know the services are not listening on that port.
After that, check that you can access the system from within the VPC, for example by running curl <machine> from another system in the same VPC. If that doesn't work, you may find the system is only listening on 127.0.0.1 or has other settings blocking connections from other machines.
If those steps succeed, then your firewall rules are indeed to blame: check that your instance is in the correct VPC (the default VPC you listed above).
Finally, you haven't specified how you assigned the static IP address, but make sure that the address is actually created and assigned to that instance.
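As a rough sketch of that debugging sequence from inside the VM (assuming a Debian/Ubuntu image where the Apache service is named apache2; adjust for your distro):
curl -I http://localhost          # does the web server answer locally?
sudo systemctl status apache2     # is the service running at all?
sudo ss -tlnp | grep ':80'        # is it bound to 0.0.0.0:80 or only 127.0.0.1?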

Configuring local laptop as puppet server and aws ec2 instance as puppet agent

I am trying to configure a Puppet server and agent, making my local laptop with Ubuntu 18.04 the Puppet server and an AWS EC2 instance the Puppet agent. While trying to do so, I am facing issues with adding hostnames to the /etc/hosts file, whether to use the public or the private IP address, and how to do the final configuration to make this work.
I have put the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, the agent probably has no route back to your laptop across the internet, as the IP of your laptop was most likely assigned by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push it to GitHub/GitLab, and then deploy it from there to your PE server using Code Manager.
Alternatively, you may be able to use a VPN to get your laptop onto the AWS VPC directly, in which case it will appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, that your Puppet agent can connect to. However, there is one solution that avoids a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance:
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip
This tunnels port 8140 on your EC2 instance to port 8140 on your localhost.
Then, inside your EC2 instance, modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the Puppet agent on your EC2 instance and everything should work as expected. Note that if you close the SSH connection created above, the tunnel will stop working.
If you want to keep the SSH tunnel open more reliably, this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
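For example, a sketch using autossh to supervise the tunnel (assuming autossh is installed; the placeholders are the same as above):
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip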

Cannot connect to EC2 - ssh: connect to host port 22: Connection refused

I am currently overseas and trying to connect to my EC2 instance through SSH, but I am getting the error ssh: connect to host ec2-34-207-64-42.compute-1.amazonaws.com port 22: Connection refused
I turned on my VPN to New York, but nothing changed. What reasons could there be for not being able to connect to this instance?
The instance is still running and serving the website, but I am not able to connect through SSH. Is this a problem with the wifi where I am staying or with the instance itself?
My debugging steps for EC2 connection timeouts:
Double check the security group access for port 22
Make sure you have your current IP on there and update to be sure it hasn't changed
Make sure the key pair you're attempting to use corresponds to the one attached to your EC2
Make sure the key pair on your local machine is chmod'ed correctly; I believe it's chmod 600 keypair.pem
Make sure you're either in your .ssh folder on your host OR correctly referencing the key: $HOME/.ssh/key.pem
Last, some weird, totally wishy-washy checks:
reboot instance
assign elastic IP and access that
switch from using the IP to Public DNS
add a : at the end of user@ip:
Totally mystical debugging steps, that last set. They fall into the "my code doesn't work - don't know why; my code does work - don't know why" category.
Note:
If you access your EC2 instance while connected to a VPN, know that your IP changes! So enable incoming traffic from your VPN's IP in your EC2 security group.
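For instance, a quick sanity check covering the key-permission steps above (keypair.pem and the hostname are placeholders; -vvv gives verbose output for debugging):
chmod 600 ~/.ssh/keypair.pem
ssh -vvv -i ~/.ssh/keypair.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com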
In AWS, navigate to Services > EC2.
Under Resources, select Running Instances.
Highlight your instance and click Connect.
In Terminal, cd into the directory containing your key and copy the command in step 3 under "To access your instance."
In Terminal, run: ssh -vvv -i [MyEC2Key].pem ec2-user@xx.xx.xx.xx (where xx.xx.xx.xx is your EC2 public IP), OR run the command in the example under step 4.
Just check whether the public IP you get when you are on the VPN is configured as a source address in the security group inbound rule that opens up port 22.
You can check your IP using https://www.google.co.in/search?q=whats+my+ip while connected to your VPN.
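If you prefer the command line, here is a sketch of adding your current IP to the security group with the AWS CLI (sg-0123456789abcdef0 is a placeholder group ID):
MYIP=$(curl -s https://checkip.amazonaws.com)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr "${MYIP}/32"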
I tried everything in this and several other answers, and also some AWS YouTube videos. I lost perhaps five hours over a few sessions trying to solve it, and now finally...
I was getting the exact same error message as the OP. I even rented another EC2 instance in a nearer data centre for twenty minutes to see if that was it.
Then I thought it might be the router or internet provider in the guest house where I am staying. I had already noticed that some non-mainstream news sites were blocked - and that was it!
You can check if the router is blocking port 22:
https://superuser.com/questions/1336054/how-to-detect-if-a-network-is-blocking-outgoing-ports
cardamom@neptune $ time nmap -p 22 portquiz.net
Starting Nmap 7.70 ( https://nmap.org ) at 2021-02-03 20:43 CET
Nmap scan report for portquiz.net (27.39.379.385)
Host is up (0.028s latency).
rDNS record for 27.39.379.385: ec2-27-39-379-385.eu-west-3.compute.amazonaws.com
PORT STATE SERVICE
22/tcp closed ssh
Nmap done: 1 IP address (1 host up) scanned in 0.19 seconds
real 0m0,212s
user 0m0,034s
sys 0m0,017s
Then, the question of why someone would want to block SSH port 22 is addressed at length here:
https://serverfault.com/questions/25545/why-block-port-22-outbound
I had the same problem after creating some instances in a new VPC. (If SSH from the internet worked before, this solution may not apply to you.)
When creating a new VPC, make sure you create an internet gateway (VPC -> Internet Gateways).
Also make sure that your VPC's route table (VPC -> Route Tables) has an entry that routes all IPs (or just your IP) to the internet gateway you just created.
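A sketch of the same wiring with the AWS CLI (all resource IDs below are placeholders):
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --gateway-id igw-0123456789abcdef0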
For me, it was because of this:
NOT ec2-user@xx.xx.xx.xx
BUT ubuntu@xx.xx.xx.xx
Check which image (AMI) your EC2 instance was launched from - it determines the default username!
Instead of
ssh -i "key.pem" ubuntu@ec2-161-smth.com
use
ssh -i "key.pem" ec2-user@ec2-161-smth.com

How to use a second Elastic Network Interface on the same subnet

When I spin up an Amazon EC2 CentOS 7 server in, say, availability zone us-east-1a, the server is automatically assigned a primary private IP address on eth0, such as 172.31.8.244/20 and a gateway of 172.31.0.1. If I then attach a second interface on eth1, I can specify the address, which needs to be within the 172.31.0.0/20 subnet (or one will be assigned to me automatically within that subnet). Eth1 will have the same gateway as eth0. Let's say I am assigned 172.31.12.121/20. I use the same security group on both eth0 and eth1, which allows SSH only in and everything out.
The problem is that when I try to SSH to eth0 from a different server, it works fine, but when I try to SSH to eth1 I get a timeout. ip addr and ip route show that both interfaces are up and have the correct routes. I can even SSH locally to eth1's address, and /var/log/secure shows the same correct entries as when I SSH to eth0. What do I need to do to be able to SSH to either interface from a different server?
The problem is asymmetric routing. A request to eth1 comes in on eth1 but the reply goes out on eth0, so the reply carries a different source IP address than the one the request was sent to, and it is dropped on the client side. The solution is to set up rules so that replies to traffic arriving on eth1 are routed back out through eth1.
First, make sure you have created an AMI of your server, because if you enter the wrong thing in following steps, you may lose all connectivity to the server and be unable to do anything but reboot it from the Amazon console web page.
Start off by setting the default route for each interface in separate tables:
ip route add default via 172.31.0.1 dev eth0 tab 1
ip route add default via 172.31.0.1 dev eth1 tab 2
To check those were properly added use:
ip route show table 1
ip route show table 2
Now you need to add rules that say to use the different tables depending on the source IP address:
ip rule add from 172.31.8.244/32 tab 1
ip rule add from 172.31.12.121/32 tab 2
You can check all of the rules with:
ip rule
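With the two rules from this example in place, the output should look roughly like this (the priority numbers may vary):
0:      from all lookup local
32764:  from 172.31.12.121 lookup 2
32765:  from 172.31.8.244 lookup 1
32766:  from all lookup main
32767:  from all lookup default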
You should now be able to connect to either IP address from a client machine. You can also use the bind option of SSH to connect from either interface on this server to a client machine:
ssh centos@client_ip_address -i mykey.pem (uses the default, eth0)
ssh -b 172.31.12.121 centos@client_ip_address -i mykey.pem (uses eth1)
ssh -b 172.31.8.244 centos@client_ip_address -i mykey.pem (uses eth0)
You can use both interfaces to connect to other EC2 servers in the same availability zone and for any interface that has a Public IP assigned to it, you can connect to the outside world or to other EC2 servers in the same VPC, even if they are in different availability zones.
But what if you want to connect to other EC2 servers that are in the same VPC but in different availability zones, in other words, servers in the same data center? The problem is that the private IP address is masked at 20 bits, which confines you to one availability zone. So for us-east-1 you have:
us-east-1a: 172.31.0.0/20
us-east-1b: 172.31.16.0/20
us-east-1d: 172.31.48.0/20
us-east-1e: 172.31.32.0/20
To connect across availability zones in one VPC and in one datacenter you need a 16-bit mask. ip addr will show:
inet 172.31.12.121/20 brd 172.31.31.255 scope dynamic eth1
If lsof -n | egrep 172.31.12.121 shows that this address is not in use, you can add the new mask and delete the old one. Note that the broadcast address has to change at the same time as the mask:
ip addr add 172.31.12.121/16 dev eth1 brd 172.31.255.255
ip addr del 172.31.12.121/20 dev eth1
Now you should be able to connect from an EC2 server in availability zone A to another host in availability zone B, so long as they are in the same VPC, even if they do not have Public IP addresses.
Troubleshooting:
If you are having problems, try resetting both interfaces, which will remove any manual twiddling you have done. First copy /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth1, editing the second file to change the DEVICE from eth0 to eth1. Then add a line to /etc/sysconfig/network that says GATEWAYDEV=eth0. Finally, run /etc/init.d/network restart (no, it should not disconnect you). Then start over with the commands above.

How to launch an amazon ec2 instance inside VPC using Chef?

This is a question primarily about Chef. When looking into controlling nodes inside an Amazon VPC with Chef, I ran into some difficulties, mainly that a node without an external IP address is not easily reachable by Chef.
I went through the basic tutorial for scenario #2 http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html#Case2_Launch_NAT
However, this times out:
knife ec2 server create -N app-server-1 -f m1.small -i rails-quick-start.pem -r "role[base]" -G WebServerSG -S rails-quick-start -x ubuntu -s subnet-580d7e30 -y -I ami-073ae46e -Z us-east-1d
What am I doing wrong?
In order for knife to be able to talk to the server, you may need to set up a VPN. If your VPC is already connected to your local network via a VPN, it should work; if not, you might want to run an OpenVPN server or something similar.
You can also set up servers in two other ways (see the bootstrap sketch below):
Create an EC2 instance and let it boot up, then run knife bootstrap against it.
Create an EC2 instance with the proper user data and have cloud-init set it up (if you are running, say, Ubuntu, which includes cloud-init).
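For the first option, a sketch of the bootstrap command (the IP is a placeholder, and this assumes the instance is reachable over SSH from your workstation):
knife bootstrap <instance-ip> -N app-server-1 -x ubuntu -i rails-quick-start.pem -r 'role[base]' --sudo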
The solution was to set up a tunnel and forward SSH on some port of a publicly visible machine to each of the other machines in the cloud. So my load balancer serves HTTP traffic on port 80, is accessible via port 22, and uses ports 2222, 2223, 2224, ... to tunnel SSH to the non-public cloud instances. On the load balancer (or any public instance) run:
ncat --sh-exec "ncat PRIVATE.SUBNET.IP 22" -l 2222 &
for example:
ncat --sh-exec "ncat 10.0.1.1 22" -l 2222 &
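From your workstation you can then reach the private instance through the forwarded port (the public IP is a placeholder; the key and user depend on your setup):
ssh -i rails-quick-start.pem -p 2222 ubuntu@<load-balancer-public-ip>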
There needs to be a way to associate an Elastic IP with the instance in order to get a public IP for easy access, and then do all the bootstrapping and SSH activity through the EIP.