I have a dockerized application on EC2, and it is running fine.
I have a security policy like the following:
Here are my instance's details:
If I hit https://54.167.118.150/, http://54.167.118.150/, https://54.167.118.150:8080, or http://54.167.118.150:8080,
the connection is refused; the browser says the site refused to connect.
Check whether port 8080 is exposed in your Dockerfile. The port should be exposed to the host; add the line below at the bottom of your Dockerfile:
EXPOSE 8080
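Note that EXPOSE on its own only documents the port; it does not publish it to the host, so you also need to publish it when starting the container. A minimal sketch, assuming your image is tagged my-app (a placeholder name):
docker run -d -p 8080:8080 my-app   # map host port 8080 to container port 8080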
My EC2 instance has the following security rules:
Unfortunately, if I browse its public IP address via HTTPS, I get "Unable to reach the site", while if I browse it via HTTP it works as it should.
SOLVED - I had to set Apache to listen on port 443.
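For reference, a sketch of what that fix can look like, assuming a Debian/Ubuntu Apache layout (paths vary by distribution):
grep -n Listen /etc/apache2/ports.conf   # the Listen directives normally live here
# ensure "Listen 443" is present (usually inside <IfModule ssl_module>)
# alongside "Listen 80", then reload Apache:
sudo systemctl reload apache2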
I have a web application running on a private server. I use SSH port tunnelling to map the private server's port to port 8080 on a Google Cloud VM, and when I do
curl http://localhost:8080
on the GCP VM shell, it returns a valid response. However, when I try to access it from outside (in a browser) using the external IP (or do curl http://[external_IP]:8080 in a shell), it returns "the IP refused to connect".
My firewall settings allow TCP traffic on 8080, such that when I run another application on port 8080 directly on the VM without SSH (say, a Docker hello-world app), it is accessible from outside using the same link and works well. Is there additional configuration I must do?
Check if your application is binding to 127.0.0.1 or localhost. If yes, change to 0.0.0.0.
Accepting traffic from the VPC requires binding to a network interface connected to the VPC. The address 0.0.0.0 means bind to all network interfaces.
The 127.x.x.x network, a.k.a. localhost or the loopback address, is an internal-only (Class A) network. If your application only binds to the loopback network, external applications cannot connect to it.
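A quick way to see the difference, using Python's built-in HTTP server purely as an illustration (not the poster's application):
python3 -m http.server 8080 --bind 0.0.0.0    # reachable from other hosts
python3 -m http.server 8080 --bind 127.0.0.1  # loopback only; external connections are refused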
If instead your goal is to bind to localhost and use SSH port forwarding to access the loopback address, then start SSH like this:
ssh -L 8080:127.0.0.1:8080 IP_ADDRESS_OF_VM
You can then access port 8080 on the VM this way:
curl http://127.0.0.1:8080
The curl command connects to port 8080 on your local machine. SSH then forwards that connection to port 8080 on the remote machine.
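Note that the question describes the opposite direction, forwarding a private server's port onto the VM with -R. In that case the forwarded port binds to the VM's loopback by default; making it reachable on the VM's external IP requires GatewayPorts yes (or clientspecified) in the VM's /etc/ssh/sshd_config. A sketch with placeholder addresses:
ssh -R 0.0.0.0:8080:localhost:8080 USER@VM_EXTERNAL_IP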
I have a database on a remote Google Cloud (GCP) machine. On GCP, I edited the firewall rules to allow access from my desktop and from an AWS EC2 instance. However, the following happens:
From desktop:
netcat -zv 35.198.56.213 27017
Connection to 35.198.56.213 27017 port [tcp/*] succeeded!
From EC2:
netcat -zv 35.198.56.213 27017
netcat: connect to 35.198.56.213 port 27017 (tcp) failed: Connection timed out
I don't understand why I can connect from my desktop but not from the EC2. Both IPs are allowed (using the instance public address). The outbound rules for the EC2 instance are allowing all traffic.
Any tips?
Edit: I am trying to connect to a mongo instance that is running on port 27017. The bindIp on /etc/mongod.conf is correctly set to 0.0.0.0.
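One thing worth verifying, though it is only a guess since the thread doesn't show the GCP rule itself: that the source address GCP actually sees from the EC2 instance matches the IP you allowed. From the EC2 instance:
curl -s https://checkip.amazonaws.com   # prints the instance's public egress IP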
I am very new to coding, so trying to figure this out was very hard for me. I'm trying to deploy my code with Docker and run it on an EC2 instance, but I can't seem to get the instance's URL to work. I set my inbound (security group) rules to HTTP (80) => 0.0.0.0/0, HTTPS (443) => 0.0.0.0/0, and SSH (22) => my IP. I read that setting my SSH to 0.0.0.0/0 was a bad idea, so I went with my IP (there was an option called 'My IP'). Also, I am using Ubuntu for my AMI.
After bringing Docker up successfully (docker-compose up), I used curl http://localhost:3001 (3001 is the port exposed inside my code) and it works fine. But when I use curl ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com, it outputs:
curl: (6) Could not resolve host: ssh and
curl: (7) Failed to connect to ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com port 80: Connection refused
curl ec2-xxx-xx-amazonaws.com sends the request on port 80, while your Docker container is listening on port 3001.
First, verify that you have published a host port to the container. Something like this should appear in docker ps -a:
0.0.0.0:3001->3001/tcp (the first 3001 can be any host port)
Next, make sure that whichever host port you used is open in the security group for your IP.
If the VPC and route table settings are also fine, then hitting the address with :3001 appended (or whatever host port you used instead of 3001) should work, as in the sketch below.
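A concrete sketch of the mapping described above (image name and ports are placeholders):
docker run -d -p 80:3001 my-image           # publish host port 80 to container port 3001
docker ps --format '{{.Names}} {{.Ports}}'  # expect something like 0.0.0.0:80->3001/tcp
With host port 80 published, a plain curl against the EC2 hostname (which defaults to port 80) reaches the app.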
I have an EC2 instance on Amazon (AWS). The instance is behind an ELB (Elastic Load Balancer). I want to allow HTTPS connections to reach the EC2 instance.
Is it necessary to have the load balancer configured for HTTPS, i.e., to check the certificates etc., or can this just be done traditionally within the EC2 instance and the virtual host SSL configuration?
I'm asking because I have allowed traffic via ELB -> EC2 for ports 80 and 443, but only port 80 reaches the instance.
EDIT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00021s latency).
Not shown: 996 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
443/tcp  open  https
3306/tcp open  mysql
EDIT 2
Here is my other Stack Overflow question explaining the bigger problem I have, hence why I opened this one: HTTPS only works on localhost
Check whether any application is running on port 443.
Use this command to check:
nmap -sT -O localhost
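If nmap isn't installed, ss gives the same information plus the owning process (run as root to see process names):
sudo ss -tlnp | grep ':443'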
EDIT
Place the certificate files on the server and then upload them to IAM using this command:
aws iam upload-server-certificate --server-certificate-name my-server-cert \
    --certificate-body file://my-certificate.pem --private-key file://my-private-key.pem \
    --certificate-chain file://my-certificate-chain.pem
For more info check this:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/ssl-server-cert.html
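Once the certificate is uploaded, the ARN it returns still has to be attached to the load balancer's HTTPS listener. A sketch for a classic ELB, with placeholder names and account ID:
aws elb create-load-balancer-listeners --load-balancer-name my-load-balancer \
    --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/my-server-cert"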