How to launch an Amazon EC2 instance inside a VPC using Chef? - amazon-web-services

This is a question primarily about Chef. When looking into controlling nodes inside an Amazon VPC with Chef, I ran into some difficulties, mainly that a node without an external IP address is not easily reachable by Chef.
I went through the basic tutorial for scenario #2: http://docs.amazonwebservices.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html#Case2_Launch_NAT
However, this times out:
knife ec2 server create -N app-server-1 -f m1.small -i rails-quick-start.pem -r "role[base]" -G WebServerSG -S rails-quick-start -x ubuntu -s subnet-580d7e30 -y -I ami-073ae46e -Z us-east-1d
What am I doing wrong?

In order for knife to be able to talk to the server, you may need to set up a VPN. If your VPC is already connected to your local network via a VPN, then it should work; if not, you might want to run an OpenVPN server or something similar.
You can also set up servers in two other ways:
Create an EC2 instance and let it boot up, then run knife bootstrap against it (see the sketch below).
Create an EC2 instance with the proper user data and have cloud-init set it up (if you are running, say, Ubuntu, which includes cloud-init).
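A minimal sketch of that first option, assuming the node's private IP (hypothetical here) is reachable from your workstation, e.g. over the VPN mentioned above; the flags reuse the key and run list from the question:
# 10.0.1.15 is a hypothetical private IP; requires a route into the VPC
knife bootstrap 10.0.1.15 -x ubuntu -i rails-quick-start.pem -N app-server-1 -r 'role[base]' --sudo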

The solution was to set up a tunnel and forward SSH for each private instance through some port on a publicly visible machine. So my load balancer serves HTTP traffic on port 80, is accessible over SSH on port 22, and uses ports 2222, 2223, 2224, ... to tunnel SSH to the non-public cloud instances. On the load balancer (or any public instance) run:
ncat --sh-exec "ncat PRIVATE.SUBNET.IP 22" -l 2222 &
for example:
ncat --sh-exec "ncat 10.0.1.1 22" -l 2222 &
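With a listener like that in place, SSH, and therefore knife, can reach the private instance through the public host; a hedged example, with the public address as a placeholder:
# SSH to the private 10.0.1.1 instance via port 2222 on the public host
ssh -i rails-quick-start.pem -p 2222 ubuntu@<public-host-ip>
# Or bootstrap it with Chef through the same forwarded port
knife bootstrap <public-host-ip> -p 2222 -x ubuntu -i rails-quick-start.pem -N app-server-1 -r 'role[base]' --sudo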

Another option is to associate an Elastic IP with the instance to get a public IP for easy access, and then do all the bootstrapping and SSH activity through the EIP.

Related

Configuring local laptop as puppet server and aws ec2 instance as puppet agent

I am trying to configure a Puppet server and agent, making my local laptop with Ubuntu 18.04 the Puppet server and an AWS EC2 instance the Puppet agent. When trying to do so, I am facing issues related to adding hostnames to the /etc/hosts file, whether to use the public or private IP address, and how to do the final configuration to make this work.
I have used the public IP and public DNS of both systems in the /etc/hosts file, but when running puppet agent --test from the agent I get the errors "temporary failure in name resolution" and "connecting to https://puppet:8140 failed". I am using this for a project and my setup needs to remain like this.
The connection is initiated from the Puppet agent to the PE server, so the agent is going to be looking for your laptop. Even if you have the details of your laptop in the hosts file, the agent probably has no route back to your laptop across the internet, as the IP of your laptop was most likely assigned by your router at home.
Why not build your Puppet master on an EC2 instance and keep it all on the same network? Edit code on your laptop, push to GitHub/GitLab, and then deploy the code from there to your PE server using Code Manager.
Alternatively you may be able to use a VPN to get your laptop onto the AWS VPC directly in which case it'll appear as just another node on the network and everything should work.
The problem here is that the Puppet server needs a public IP, or an IP in the same network as your EC2 instance, that your Puppet agent can connect to. However, there is one solution that avoids a VPN, though it can't be permanent: you can tunnel your local port to the EC2 instance.
ssh -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip
This tunnels port 8140 on your EC2 instance back to port 8140 on your localhost.
Then inside your ec2 instance you can modify your /etc/hosts file to add this:
127.0.0.1 puppet
Now run the Puppet agent on your EC2 instance and everything should work as expected. Also note that if you close the SSH connection created above, the tunnel stops working.
If you want to keep the ssh tunnel open a bit more reliably then this answer might be helpful: https://superuser.com/questions/37738/how-to-reliably-keep-an-ssh-tunnel-open
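If you would rather not babysit the session, a sketch using autossh (assuming it is installed) keeps the same reverse tunnel alive automatically:
# -M 0 disables autossh's monitor port in favour of SSH's own keepalives
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -i <pemfile-location> -R 8140:localhost:8140 username@ec2_ip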

SSH Port forwarding / Tunneling with multiple hops

Background
Three subnets exist in an AZ in AWS. Two of them are private and one is public.
The public subnet has a Jumpbox which can be connected to from my local machine via SSH using a pem file (sample: ssh -i my-key-file.pem ec2-user@host1).
The first private subnet has an EC2 instance that acts as an Application Server. It can only be reached from the Jumpbox via SSH, using the same pem file (sample: ssh -i my-key-file.pem ec2-user@host2, executed on host1).
The second private subnet hosts an Oracle instance using the AWS RDS service. It is running on port 1521. The DB can only be accessed from the App Server (host2).
How I am working currently
host2 has the sqlplus client installed.
First, I connect to host1, then to host2, and then run sqlplus to execute queries at the command line (no GUI).
I am planning to use GUI tools like SQL Developer to connect right from my local machine. I thought this could be achieved using port forwarding/SSH tunneling.
I tried different options, but with no success. The following links were useful:
https://superuser.com/questions/96489/an-ssh-tunnel-via-multiple-hops
https://rufflewind.com/2014-03-02/ssh-port-forwarding
My Approach to SSH Tunneling
ssh -N -L 9999:127.0.0.1:1234 ec2-user@host1 -i my-key-file.pem -v -v -v
This is executed on my local machine.
On its own it does not do much, as I can already connect to host1 using SSH; I did not know how to forward across multiple hops, so I am using this host as my first hop. After this, ssh listens on port 9999, which is local to my machine, and forwards any traffic to port 1234 on host1. My assumption is that if I point sqlplus on my local machine at localhost:9999, the traffic will arrive at host1:1234.
I used 127.0.0.1 because the target of SSH tunneling is interpreted relative to the SSH server, which is host1; here the target and the SSH server are the same host.
ssh -N -L 1234:db-host:1521 ec2-user@host2 -i my-key-file.pem -v -v -v
This is executed on host1
After this, ssh forwards any incoming traffic on port 1234 to the target host (the DB host) on port 1521, using host2 as the tunnel.
Again, my assumption is that ssh is listening on port 1234 on host1, and any traffic arriving there will be delivered to the DB host via host2.
I executed both commands and did not see any error. I verified which ports are listening using netstat -tulpn | grep LISTEN.
After these two, my plan was to connect to the database using hostname localhost and port 9999.
What's going wrong!
When I try to connect to the DB from my local machine, I get an error from my SQL client: "Got minus one from a read call". I could not understand the debug messages in the ssh logs.
I believe my understanding of how port forwarding works might not be right. Any inputs would be helpful.
Thanks for your time!
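As an aside, on OpenSSH 7.3+ the two forwards can be collapsed into a single command with ProxyJump; a hedged sketch, assuming the key is loaded into ssh-agent so both hops can authenticate:
# Load the key once so both hops can use it
ssh-add my-key-file.pem
# Jump through host1 to host2, forwarding local port 9999 straight to the DB
ssh -J ec2-user@host1 -N -L 9999:db-host:1521 ec2-user@host2
# Then point SQL Developer at localhost:9999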

Access bosh-lite from remote machine

I installed bosh-lite on my physical machine1, and I want to use the BOSH CLI on another machine2 to access the BOSH director. machine1 and machine2 can connect to each other.
I tried to use iptables SNAT, but it did not work. How can I do this?
Make sure you can first connect from the same local machine using the CLI. If that works, make sure you can connect to the machine1 IP from machine2 (the host, not the bosh-lite VM). If that works, you need to add a route on machine2 to tell it to send traffic for the bosh-lite VM IPs to the machine1 IP.
sudo route add -net 192.168.100.0/24 192.168.50.4
where the first argument is the network range defined in the bosh manifest, and the second is the gateway, likely the machine1 IP.
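Note that the route syntax above is the BSD/macOS form; on a Linux machine2 the equivalent (same example addresses) would be:
# Send traffic for the bosh-lite VM network via machine1
sudo ip route add 192.168.100.0/24 via 192.168.50.4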

How to connect to an AWS EC2 private IP with FileZilla

I am currently working on an AWS EC2 Linux AMI. I have a private IP. Is it possible to access that private IP with FileZilla to transfer files? I am unable to do so.
To access an EC2 machine via its private IP, you need to set up your own VPN server. If you already have a VPN set up in your AWS cloud, then you just need to install a VPN client, log in with your credentials, and you will be able to access the EC2 machine, or transfer files using FileZilla, over the private IP too. I am assuming that you haven't set up a VPN server; you can use the OpenVPN AMI from the AWS Marketplace to set one up. Below is a good link for getting started.
https://docs.openvpn.net/how-to-tutorialsguides/virtual-platforms/amazon-ec2-appliance-ami-quick-start-guide/
After completing this, you have to install OpenVPN on your own machine; after logging in with your credentials you will be able to access your EC2 instance via its private IP.
Below is the link for installing OpenVPN on an Ubuntu machine. For other operating systems, explore the site.
https://docs.openvpn.net/getting-started/how-to-install-openvpn-as-software/
OpenVPN is just one alternative; you can use others as your needs dictate.
You can do this in two ways:
Create a bastion host which will connect to the private instance.
Use port forwarding, i.e. tunnelling.
If you are using a bastion host to connect to the private EC2 instance, these steps will be useful.
Using FileZilla to transfer files to a private EC2 instance through a bastion host:
Note: use the same pem file for the bastion host and the private EC2 instance.
1. Open a terminal or cmd (Linux terminal, e.g. Git Bash).
2. Connect to the AWS EC2 instance with one terminal command:
ssh -N -L 1234:<private_instance_ip or Private_DNS>:22 -i <Pem_File> <user>@<Bastion_host_public_ip>
e.g.
ssh -N -L 1234:ip-171-12-21-208.us-east-1.compute.internal:22 -i app_prod.pem ubuntu@ec2-31-92-123-22.us-east-1.compute.amazonaws.com
Note: the first time you run this command it will ask "Are you sure you want to continue connecting?" - answer yes.
3. Keep this terminal or cmd open; if you close this session the connection is broken.
4. Open the FileZilla application and, in the Edit menu, click Settings.
5. On the Settings page, click SFTP, add the pem file of the EC2 instance, and click OK.
6. Add the following entries:
Host: 127.0.0.1 or sftp://127.0.0.1
Username: <your_user>
Password: keep empty
Port: 1234
7. Click Quickconnect.
Once the connection is established, you can easily transfer files from your local machine to the private instance.
See: Using scp to transfer files to a private EC2 instance through a bastion host
https://www.davidbegin.com/using-scp-to-transfer-files-to-a-private-ec2-instance-through-a-bastion-host/

Can you connect to Amazon ElastiCache Redis outside of Amazon?

I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node from outside Amazon EC2, such as from my local dev setup or VPS instances provided by other vendors.
Currently, when trying from my local setup:
redis-cli -h my-node-endpoint -p 6379
I only get a timeout after some time.
SSH port forwarding should do the trick. Try running this from your client:
ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>
Then from your client
redis-cli -h 127.0.0.1 -p 6379
It works for me.
Please note that the default port for Redis is 6379, not 6739. Also make sure you allow the security group of the EC2 node that you use to connect to your Redis instance in your cache security group.
Also, AWS now supports accessing your cluster directly; more info here.
Update 2018
The previous answer was accurate when written; however, it is now possible, with some configuration, to access Redis from outside AWS using the directions in Accessing ElastiCache Resources from Outside AWS.
Old Answer
No, you can't without resorting to "tricks" such as a tunnel, which may be OK for testing but will kill any real benefit of using a super-fast cache by adding latency/overhead.
The old FAQ, under "How is using Amazon ElastiCache inside a VPC different from using it outside?", said:
An Amazon ElastiCache Cluster, inside or outside a VPC, is never allowed to be accessed from the Internet
However, this language has been removed from the current FAQ.
These answers are out of date.
You can access ElastiCache from outside AWS by following these steps:
1. Create a NAT instance in the same VPC as your cache cluster but in a public subnet.
2. Create security group rules for the cache cluster and NAT instance.
3. Validate the rules.
4. Add an iptables rule to the NAT instance (a sketch follows the link below).
5. Confirm that the trusted client is able to connect to the cluster.
6. Save the iptables configuration.
For a more detailed description, see the AWS guide:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
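A minimal sketch of steps 4 and 6, assuming a hypothetical cache-node IP of 10.0.1.230 and an Amazon Linux NAT instance (the save command varies by distro):
# On the NAT instance: forward inbound TCP 6379 to the cache node (step 4)
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6379 \
  -j DNAT --to-destination 10.0.1.230:6379
# Persist the rules across reboots (step 6)
sudo service iptables save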
Not so old a question; I ran into the same issue myself and solved it.
Sometimes, for development reasons, you need access from outside (to avoid multiple deployments just for a simple bug fix, maybe?).
Amazon has published a new guide that uses EC2 instances as proxies for the outside world:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
Good luck!
By the way, if anyone wants a Windows EC2 solution, try these at the command prompt (on said Windows EC2 machine):
To Add port-forwarding
C:\Users\Administrator>netsh interface portproxy add v4tov4 listenport=6379 listenaddress=10.xxx.64.xxx connectport=6379 connectaddress=xxx.xxxxxx.ng.0001.use1.cache.amazonaws.com
To list port-forwarded ports
C:\Users\Administrator>netsh interface portproxy show all
Listen on ipv4:             Connect to ipv4:
Address         Port        Address                                       Port
10.xxx.128.xxx  6379        xxx.xxxxx.ng.0001.use1.cache.amazonaws.com    6379
To remove port-forwarding
C:\Users\Administrator>netsh interface portproxy delete v4tov4 listenport=6379 listenaddress=10.xxx.128.xxx
We are using HAProxy as a reverse proxy server:
Your system outside AWS ---> Internet ---> HAProxy with public IP ---> Amazon Redis (ElastiCache)
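A minimal haproxy.cfg sketch of that layout, with a hypothetical ElastiCache endpoint; HAProxy listens publicly on 6379 and forwards raw TCP into the VPC:
# haproxy.cfg (sketch)
listen redis
    bind 0.0.0.0:6379
    mode tcp
    server elasticache my-cluster.xxxxxx.0001.use1.cache.amazonaws.com:6379 check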
Note that there was another good reason to do this at the time: our Node.js client did not support Amazon DNS failover, because the client driver never repeated the DNS lookup. If the Redis master failed, the driver would keep connecting to the old master, which had become a slave after failover. Using HAProxy solved that problem. The latest ioredis driver now supports Amazon DNS failover.
This is a solid Node script that will do all the dirty work for you. Tested and verified it works.
https://www.npmjs.com/package/uzys-elasticache-tunnel
How to use
Usage: uzys-elasticache-tunnel [options] [command]
Commands:
  start [filename]   start tunneling with configuration file (default: config.json)
  stop               stop tunneling
  status             show tunneling status
Options:
  -h, --help      output usage information
  -V, --version   output the version number
Usage Example
start - uzys-elasticache-tunnel start ./config.json
stop - uzys-elasticache-tunnel stop
status - uzys-elasticache-tunnel status
It is not possible to directly access the classic cluster from a VPC instance; the workaround is configuring NAT on the classic instance.
The NAT needs a simple TCP proxy:
# Public IP/port the proxy listens on, and the private target to reach
YourIP=1.2.3.4
YourPort=80
TargetIP=2.3.4.5
TargetPort=22
# Rewrite inbound connections on $YourIP:$YourPort to the target
iptables -t nat -A PREROUTING --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
# Make return traffic flow back through this box
iptables -t nat -A POSTROUTING -p tcp --dst $TargetIP --dport $TargetPort -j SNAT \
--to-source $YourIP
# Handle connections originating on the proxy itself
iptables -t nat -A OUTPUT --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
I resolved this using this Amazon doc; it says you'll have to install stunnel on your other EC2 machine.
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
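A hedged sketch of the stunnel client config that article describes, with a hypothetical endpoint; stunnel accepts plaintext Redis connections locally and wraps them in TLS toward the in-transit-encrypted ElastiCache node:
# /etc/stunnel/redis-cli.conf (sketch)
[redis-cli]
client = yes
accept = 127.0.0.1:6379
connect = my-cluster.xxxxxx.use1.cache.amazonaws.com:6379
After starting stunnel, connect locally with: redis-cli -h 127.0.0.1 -p 6379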