Unable to access EC2 instance - amazon-web-services

I am using AWS and I created an Auto Scaling launch configuration with this shell script:
#!/bin/sh
curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
After creating this and the load balancer, two instances were created. I then copied the DNS name and pasted it into my browser, but it says:
This site can’t be reached
internal-elb-asg-167368762.us-east-1.elb.amazonaws.com took too long to respond.
ERR_CONNECTION_TIMED_OUT

EDIT
I followed your steps and it failed.
You have to change this part of the User Data:
#!/bin/sh curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
With this:
#!/bin/sh
curl -L https://us-west-2-aws-training.s3.amazonaws.com/awsu-spl/spl03-working-elb/static/bootstrap-elb.sh | sh
Edit: As @john-rotenstein mentioned, it is not necessary to use sudo.
Also, check the following:
You have the correct security groups on your EC2 instances and on your ELB.
Check that your ELB has a listener on port 80.
Port 80 must be open in your EC2 security group to your ELB security group, and port 80 must be open to the world (0.0.0.0/0) in your ELB security group.
Finally, are you sure that you are not using an internal load balancer? The DNS name in your error starts with internal-, which means the ELB is not reachable from the internet.
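If you want to set those rules from the command line, a rough sketch with the AWS CLI might look like this (the security group IDs are placeholders for your own ELB and instance groups):
# allow HTTP from anywhere into the ELB security group
aws ec2 authorize-security-group-ingress --group-id sg-elb0000000 --protocol tcp --port 80 --cidr 0.0.0.0/0
# allow HTTP from the ELB security group into the instance security group
aws ec2 authorize-security-group-ingress --group-id sg-ec20000000 --protocol tcp --port 80 --source-group sg-elb0000000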
Hope it helps you.

Related

Accessing Tensorboard on AWS

I'm trying to access TensorBoard on AWS. Here is my setup:
TensorBoard: tensorboard --host 0.0.0.0 --logdir=train, which prints:
Starting TensorBoard b'39' on port 6006 (You can navigate to http://172.31.18.170:6006)
AWS security groups (inbound):
HTTPS TCP 443 0.0.0.0/0
Custom_TCP TCP 6006 0.0.0.0/0
However, when connecting to ec2-blabla.us-west-1.compute.amazonaws.com:6006 I can't see anything; I basically can't connect.
Do you have any idea?
You can use the SSH tunneling technique.
In your terminal:
ssh -i /path/to/your/AWS/key/file -NL 6006:localhost:6006 user@host
where:
user and host are your AWS EC2 user and hostname.
-N: don't execute a remote command (just forward ports)
-L: [bind_address:]port:host:hostport
After that, browse to http://localhost:6006/
Run tensorboard in your EC2 terminal (you can customize the logdir and port):
tensorboard --logdir=data/model --port=8080
Find your workstation's public IP address (a.b.c.d) by visiting http://ip4.me/
Access the security group configuration assigned to your EC2 instance and add a Custom TCP inbound rule for port 8080 with source a.b.c.d/32 (a CLI sketch follows these steps).
Outbound should be set to allow traffic on the TensorBoard port (in this case 8080), or you can simply allow all outgoing traffic from your EC2 instance:
Type          Protocol   Port Range   Destination
All traffic   All        All          0.0.0.0/0
Use your public DNS to access tensorboard from your workstation
http://ec2-xx-xxx-xx-xx.compute-1.amazonaws.com:8080/
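If you prefer the command line for the inbound rule, a minimal sketch with the AWS CLI could be (sg-xxxxxxxx and a.b.c.d are placeholders for your security group ID and workstation IP):
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8080 --cidr a.b.c.d/32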
Fast (but unsecure) solution:
Run:
tensorboard --logdir=/training --host=0.0.0.0 --port=8080
on your AWS instance.
Make sure that both your inbound and outbound rules on AWS console (control center) are as unrestricted as possible (allow all types, all ports etc.). However, keep in mind that this solution is not recommendable for environments requiring security (in our case, we didn't consider security for training an NN).
An attempt to explain why this works: when the policy is set as described, AWS still seems to prohibit inbound/outbound connections on the standard tensorboard port 6006. This does not seem to apply to the port 8080.
Long (but more secure) solution:
See: https://blog.altoros.com/getting-started-with-a-cpu-enabled-tensorflow-instance-on-aws.html
(provides explanations for setting ports correctly on AWS)
I managed to set it up like this:
Go to security groups in your ec2 console:
Choose the relevant security group in the table, click edit.
Add a Custom TCP rule allowing inbound traffic on port 8080 (the port TensorBoard will use below).
Start tensorboard: tensorboard --logdir tf_summary/ --port 8080
Find out the URL of your instance and visit http://yourURL:8080
Simply run TensorBoard without the host parameter (which poses restrictions):
tensorboard --logdir XXX --port 6006
I suffered from the same problem for several days.
Fortunately, I solved the issue by adding a rule to the AWS outbound rules, just like the one I had added to the inbound rules.
Regardless of this setting, it works from home; the same error still happens only from inside the company network.

Setting up username/password authentication with EC2 for mongodb on port 27017

I currently have an EC2 instance that I am using to host my MongoDB server on port 27017. Previously I had set up the security group to use just my home IP address to authenticate a TCP connection to port 27017; however, I no longer have a static IP. I now have one that changes every day and that I cannot control. Is there a way to create a Mongo URI like MongoLab has,
mongodb://<username>:<password>@<my EC2 IP>:27017/db
that I can use to connect from PyMongo.
There are many, many guides available by searching that describe how to enable MongoDB authentication.
Alternatively, you could create a small script that uses the AWS CLI to update the security group with your current IP address. The script could be run when needed, or set to run automatically when your computer starts or when you log in.
Install the AWS CLI on your machine. You need proper IAM permissions to update the security group. Then you can use the bash script below to update your security group with your current IP address.
#!/bin/bash
# look up this machine's current public IP address
ip=$(curl -s http://whatismijnip.nl | cut -d " " -f 5)
sleep 5
# open the port in the security group for that address (use --port 27017 for MongoDB)
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 22 --cidr $ip/24

Accessing ElasticSearch on EC2 instance from outside the cloud

I am trying to access my ElasticSearch on a running EC2 instance from outside the Cloud. I currently have SSH/HTTP/HTTPS open to the public for inbound traffic as well as all open for outbound traffic. I set up a public IP for my EC2 instance as well.
By default ElasticSearch is on port 9200. I'm not sure if I configured my elasticsearch.yml file correctly but it basically has the default configuration I only changed the cluster.name to something else.
When I type my public IP with port 9200 into my local browser, or locally do a telnet {public-ip} 9200, there is no response. When I SSH into my EC2 instance, I can perform a curl localhost:9200 and I get the correct response from Elasticsearch.
How can I connect to my ElasticSearch running on my EC2 instance from outside the cloud?
I added a Custom Rule for my security group for inbound traffic that includes port 9200 and is open to 0.0.0.0/0 and I still cannot access this EC2 instance
Potential issues to check are a wrong binding and the instance's operating system firewall.
Check where Elasticsearch is binding: if it is binding to 127.0.0.1 you won't be able to reach it from the outside.
Check the binding by running, in a shell on the Elasticsearch EC2 instance:
sudo netstat -lptun | grep 9200
If it shows 127.0.0.1:9200 then there is a misconfiguration; if it shows *:9200 or :::9200 then it is correct.
If it shows 127.0.0.1 then you should modify the Elasticsearch parameter network.bind_host as described in: https://www.elastic.co/guide/en/elasticsearch/reference/1.4/modules-network.html
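For example, on a package-based install a quick way to change the binding could be (this assumes the config lives at /etc/elasticsearch/elasticsearch.yml, which may differ on your setup):
echo 'network.bind_host: 0.0.0.0' | sudo tee -a /etc/elasticsearch/elasticsearch.yml
sudo service elasticsearch restart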
Additionally, HTTP/HTTPS and SSH are usually allowed by the default operating system firewall, whereas Elasticsearch's port 9200 is not. This is usually the case for RHEL and CentOS. You can temporarily disable iptables and check if it works.
To disable iptables run:
sudo iptables -F
If the connection works after disabling iptables, you should configure iptables to allow connections on port 9200.
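A minimal sketch of such a rule (how you persist it depends on the distribution; on RHEL/CentOS the iptables service can save it):
sudo iptables -I INPUT -p tcp --dport 9200 -j ACCEPT
sudo service iptables save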
I hope this helps.
G.
It is a mess around security groups.
You can add or remove rules for a security group (also referred to as authorizing or revoking inbound or outbound access).
You should launch your instance with a security group that allows inbound traffic on port 9200.
Establish an SSH tunnel from your desktop to EC2, then simply use your browser; follow the steps given in https://www.jeremydaly.com/access-aws-vpc-based-elasticsearch-cluster-locally/
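For an Elasticsearch that runs directly on the EC2 instance, the tunnel itself might look like this (the key path, user and hostname are placeholders):
ssh -i /path/to/key.pem -N -L 9200:localhost:9200 ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
Then browse to http://localhost:9200/ from your desktop.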

Accessing RDS from within a Docker container not getting through security group?

I'm attempting to run a webserver that uses an RDS database with EC2 inside a docker container.
I've setup the security groups so the EC2 host's role is allowed to access the RDS and if I try to access it from the host machine directly everything works correctly.
However, when I run a simple container on the host and attempt to access the RDS, it gets blocked as if the security group weren't letting it through. After a bunch of trial and error it seemed that the container's requests indeed don't appear to come from the EC2 host, so the firewall says no.
I was able to work around this in the short run by setting --net=host on the Docker container; however, this breaks a lot of great Docker networking functionality like being able to map ports (i.e., now I need to make sure each instance of the container listens on a different port by hand).
Has anyone found a way around this? It seems like a pretty big limitation to running containers in AWS if you're actually using any AWS resources.
Yes, containers do hit the public IPs of RDS. But you do not need to tune low-level Docker options to allow your containers to talk to RDS. The ECS cluster and the RDS instance have to be in the same VPC and then access can be configured through security groups. The easiest way to do this is to:
Navigate to the RDS instances page
Select the DB instance and drill in to see details
Click on the security group id
Navigate over to the Inbound tab and choose Edit
And ensure there is a rule of type MySQL/Aurora with source Custom
When entering the custom source, just start typing in the name of the ECS cluster and the security group name will be auto-completed for you
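If you would rather do the same thing from the command line, a rough AWS CLI equivalent might be (the group IDs are placeholders for the RDS and ECS security groups):
# allow MySQL/Aurora (port 3306) from the ECS security group into the RDS security group
aws ec2 authorize-security-group-ingress --group-id sg-rds0000000 --protocol tcp --port 3306 --source-group sg-ecs0000000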
This tutorial has screenshots that illustrate where to go.
Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.
Figured out what was happening, posting here in case it helps anyone else.
Requests from within the container were hitting the public IP of the RDS rather than the private one (which is how the security groups work). It looks like the DNS inside the Docker container was using the 8.8.8.8 Google DNS, which wouldn't do the AWS black magic of turning the RDS endpoint into the private IP.
So for instance:
DOCKER_OPTS="--dns 10.0.0.2 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -g /mnt/docker"
The inbound rule for the RDS should be set to the private IP of the EC2 instance rather than the public IPv4.
As @adamneilson mentions, setting the Docker options is your best bet. Here is how to discover your Amazon DNS server in the VPC. Also, the section Enabling Docker Debug Output in the Amazon EC2 Container Service Developer Guide's Troubleshooting chapter mentions where the Docker options file is.
Assuming you are running a VPC block of 10.0.0.0/24 the DNS would be 10.0.0.2.
For CentOS, Red Hat and Amazon:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/sysconfig/docker
For Ubuntu and Debian:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/default/docker
When I tried to connect to AWS RDS from inside a Docker container, I got an "Access denied for user 'username'@'xxx.xx.xxx.x' (using password: YES)" error.
To solve this issue, I did the two things below:
I created a new user and granted privileges:
CREATE USER 'newuser'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
FLUSH PRIVILEGES;
Added the global DNS address 8.8.8.8 to the Docker container when running it, so that the container can resolve the IP address of the AWS RDS endpoint from its domain name:
$ docker run --name backend-app --dns=8.8.8.8 -p 8000:8000 -d backend-app
Then I connected from inside the Docker container to AWS RDS successfully.
Note: At first I tried only the second way, but it did not solve the connection problem. When I applied both, it worked.

Can you connect to Amazon ElastiCache Redis outside of Amazon?

I'm able to connect to an ElastiCache Redis instance in a VPC from EC2 instances. But I would like to know if there is a way to connect to an ElastiCache Redis node outside of Amazon EC2 instances, such as from my local dev setup or VPS instances provided by other vendors.
Currently, when trying from my local setup:
redis-cli -h my-node-endpoint -p 6379
I only get a timeout after some time.
SSH port forwarding should do the trick. Try running this from your client:
ssh -f -N -L 6379:<your redis node endpoint>:6379 <your EC2 node that you use to connect to redis>
Then from your client
redis-cli -h 127.0.0.1 -p 6379
It works for me.
Please note that the default port for Redis is 6379, not 6739. Also make sure you allow the security group of the EC2 node that you use to connect to your Redis instance in your cache security group.
Also, AWS now supports accessing your cluster from outside AWS; see Accessing ElastiCache Resources from Outside AWS.
Update 2018
The previous answer was accurate when written; however, it is now possible, with some configuration, to access the Redis cache from outside AWS by following the directions in Accessing ElastiCache Resources from Outside AWS.
Old Answer
No, you can't without resorting to 'tricks' such as a tunnel, which may be OK for testing but will kill any real benefit of using a super-fast cache due to the added latency/overhead.
The old FAQ, under "How is using Amazon ElastiCache inside a VPC different from using it outside?", said:
An Amazon ElastiCache Cluster, inside or outside a VPC, is never allowed to be accessed from the Internet
However, this language has been removed from the current FAQ.
These answers are out of date.
You can access ElastiCache from outside AWS by following these steps:
Create a NAT instance in the same VPC as your cache cluster but in a public subnet.
Create security group rules for the cache cluster and NAT instance.
Validate the rules.
Add an iptables rule to the NAT instance (a sketch follows the link below).
Confirm that the trusted client is able to connect to the cluster.
Save the iptables configuration.
For a more detailed description, see the AWS guide:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
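The iptables step on the NAT instance is roughly of this form (the node IP and port 6379 are placeholders for your cluster; the standard NAT AMI already masquerades the traffic it forwards):
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 6379 -j DNAT --to-destination <elasticache-node-ip>:6379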
Not such an old question; I ran into the same issue myself and solved it:
Sometimes, for development reasons, you need to access the cache from outside (to avoid redeploying just for a simple bug fix, maybe?).
Amazon has published a new guide that uses EC2 instances as proxies for the outside world:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html#access-from-outside-aws
Good luck!
By the way, if anyone wants a Windows EC2 solution, try these at the command prompt (on said Windows EC2 machine):
To Add port-forwarding
C:\Users\Administrator>netsh interface portproxy add v4tov4 listenport=6379 listenaddress=10.xxx.64.xxx connectport=6379 connectaddress=xxx.xxxxxx.ng.0001.use1.cache.amazonaws.com
To list port-forwarded ports
C:\Users\Administrator>netsh interface portproxy show all
Listen on ipv4:               Connect to ipv4:
Address          Port         Address                                       Port
10.xxx.128.xxx   6379         xxx.xxxxx.ng.0001.use1.cache.amazonaws.com    6379
To remove port-forwarding
C:\Users\Administrator>netsh interface portproxy delete v4tov4 listenport=6379 listenaddress=10.xxx.128.xxx
We are using HAProxy as a reverse proxy server.
Your system outside AWS ---> Internet --> HAProxy with public IP --> Amazon Redis (ElastiCache)
Notice that there was another good reason to do that (at the time):
We use a Node.js client which didn't support Amazon DNS failover, so the client driver didn't do a DNS lookup again.
If Redis failed over, the client driver would keep connecting to the old master, which is a slave after the failover.
Using HAProxy solved that problem.
Now, with the latest ioredis driver, Amazon DNS failover is supported.
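For reference, a minimal sketch of the HAProxy piece, appended to /etc/haproxy/haproxy.cfg (the ElastiCache endpoint is a placeholder):
sudo tee -a /etc/haproxy/haproxy.cfg <<'EOF'
# plain TCP pass-through from the proxy's public port to the ElastiCache node
listen redis
    bind 0.0.0.0:6379
    mode tcp
    server elasticache my-cluster.xxxxxx.0001.use1.cache.amazonaws.com:6379 check
EOF
sudo service haproxy restart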
This is a solid Node script that will do all the dirty work for you. Tested and verified that it works.
https://www.npmjs.com/package/uzys-elasticache-tunnel
How to use
Usage: uzys-elasticache-tunnel [options] [command]
Commands:
start [filename] start tunneling with configuration file (default: config.json)
stop stop tunneling
status show tunneling status
Options:
-h, --help output usage information
-V, --version output the version number
Usage Example
start - uzys-elasticache-tunnel start ./config.json
stop - uzys-elasticache-tunnel stop
status - uzys-elasticache-tunnel status
It is not possible to directly access a classic cluster from a VPC instance. The workaround would be configuring NAT on the classic instance.
The NAT needs a simple TCP proxy:
YourIP=1.2.3.4
YourPort=80
TargetIP=2.3.4.5
TargetPort=22
iptables -t nat -A PREROUTING --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
iptables -t nat -A POSTROUTING -p tcp --dst $TargetIP --dport $TargetPort -j SNAT \
--to-source $YourIP
iptables -t nat -A OUTPUT --dst $YourIP -p tcp --dport $YourPort -j DNAT \
--to-destination $TargetIP:$TargetPort
I resolved this using the Amazon docs below; they say you have to install stunnel on another EC2 machine.
https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
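A rough sketch of that setup on the intermediate EC2 machine (the ElastiCache endpoint is a placeholder; stunnel is mainly relevant when in-transit encryption is enabled, and the linked article covers the details):
sudo tee /etc/stunnel/redis-cli.conf <<'EOF'
client = yes
[redis-cli]
accept = 127.0.0.1:6379
connect = master.mycluster.xxxxxx.use1.cache.amazonaws.com:6379
EOF
sudo stunnel /etc/stunnel/redis-cli.conf
redis-cli -h 127.0.0.1 -p 6379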