Docker - Can't access Docker port from outside - amazon-web-services

So I created a new EC2 instance and installed Docker on it.
I deployed code from ( https://github.com/commonsearch/cosr-front/blob/master/INSTALL.md ) and followed the install instructions.
The install was successful and I started the server:
[ec2-user@ip-172-30-0-127 cosr-front]$ make docker_devserver
docker run -e DOCKER_HOST --rm -v "/home/ec2-user/cosr-front:/go/src/github.com/commonsearch/cosr-front:rw" -w /go/src/github.com/commonsearch/cosr-front -p 9700:9700 -i -t commonsearch/local-front make devserver
mkdir -p build
go build -o build/cosr-front.bin ./server
GODEBUG=gctrace=1 COSR_DEBUG=1 ./build/cosr-front.bin
2016/05/28 16:32:38 Using Docker host IP: 172.17.0.1
2016/05/28 16:32:38 Server listening on 127.0.0.1:9700 - You should open http://127.0.0.1:9700 in your browser!
Well, now when I want to access it from outside, I can't! I can't even curl the local server.
When I run docker ps, it shows the correct port forwarding:
[ec2-user@ip-172-30-0-127 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1a9f77e1eeb1 commonsearch/local-front "make devserver" 4 minutes ago Up 4 minutes 0.0.0.0:9700->9700/tcp stoic_hopper
9ff00fe3e70d commonsearch/local-elasticsearch-devindex "/docker-entrypoint.s" 4 minutes ago Up 4 minutes 0.0.0.0:39200->9200/tcp, 0.0.0.0:39300->9300/tcp kickass_wilson
These are my docker images:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 3e205118cd3f 17 minutes ago 853.3 MB
<none> <none> 1d233da1fa59 2 hours ago 955.7 MB
debian jessie ce58426c830c 4 days ago 125.1 MB
commonsearch/local-front latest 30de7ab48d43 7 weeks ago 1.024 GB
commonsearch/local-elasticsearch-devindex latest b1156ada5a24 11 weeks ago 383.2 MB
commonsearch/local-elasticsearch latest 808e72f49b4a 3 months ago 355.2 MB
I have tried disabling IPv6 and all kinds of nonsense Google offered me, but without success.
Any ideas?
EDIT:
Also, if I enter the frontend's Docker container (using docker exec), then I CAN ping and curl the frontend.
But I can't from outside the container (neither via SSH on the host, nor from my home PC using a browser).
Also, my Docker version:
Client:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.2
Git commit: a34a1d5/1.9.1
Built:
OS/Arch: linux/amd64

I made an issue on GitHub as well, and one guy saved the day.
Here's his response:
Server listening on 127.0.0.1:9700
Your application is listening on localhost. localhost is scoped to the container itself. Thus to be able to connect to it, you would have to be inside the container.
To fix, you need to get your application to listen on 0.0.0.0 instead.
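A quick way to see this for yourself from another SSH session on the EC2 host (a diagnostic sketch; it assumes netstat is available inside the image, and the container name stoic_hopper comes from the docker ps output above):
# On the host: Docker's proxy publishes the port on all interfaces.
$ sudo netstat -tlnp | grep 9700
# Inside the container: the server is bound only to the container's own 127.0.0.1,
# so the traffic Docker forwards to the container's IP on 9700 finds nothing listening.
$ docker exec -it stoic_hopper netstat -tln | grep 9700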

127.0.0.1 is the loopback address for the local (EC2) instance. I just recreated your problem following the same instructions and setting up the server in a docker container on an EC2 instance.
If you open another ssh session to your EC2 instance you CAN curl the loopback address, which just spits out the HTML shown below.
<!DOCTYPE html><html lang="en"><head><title>
Common Search
</title><meta content="/apple-touch-icon-precomposed.png" itemprop="image"><link href="/favicon.ico" rel="shortcut icon"><!-- CSS: This will be replaced in templates.go:preprocessTemplate() by the inline, compiled CSS
if the file build/static/css/index.css exists --><link rel="stylesheet" href="/css/global.css"/><link rel="stylesheet" href="/css/header.css"/><link rel="stylesheet" href="/css/footer.css"/><link rel="stylesheet" href="/css/hits.css"/><link rel="stylesheet" href="/css/responsive.css"/><!-- ENDCSS --><meta name="viewport" content="width=device-width, initial-scale=1"></head><body class="full"><header id="h"><div class="about">About</div><form id="f" action="/" method="GET" data-init="{"q":"","p":1,"g":""}">Common Search<div id="w"><div id="qw"><input id="q" name="q" type="text" size="60" value="" autofocus tabindex="3"/></div><span id="g"><select name="g" tabindex="4"><option value="ar">AR</option><option value="de">DE</option><option selected value="en">EN</option><option value="es">ES</option><option value="fr">FR</option><option value="it">IT</option><option value="ja">JA</option><option value="ko">KO</option><option value="nl">NL</option><option value="pl">PL</option><option value="pt">PT</option><option value="ru">RU</option><option value="vi">VI</option><option value="zh">ZH</option><option value="all">ALL</option></select></span><input id="s" type="submit" value="šŸ”" tabindex="5"/></div></form></header><div id="hits"></div><div id="dbg"></div><div id="pager" data-page="1"></div><script src="/js/index.js" type="text/javascript"></script></body></html>
However, I doubt this is what you actually want.
If you want to be able to access the hosted server from your (or any other) computer, you need to edit the security group for your EC2 instance.
From the nav bar on the left side of the AWS console, select Network & Security -> Security Groups. Select the security group that applies to your current EC2 instance (assuming you made it with the launch wizard, it will have a name like 'launch-wizard-1 created 2016-05-28T12:57:23.487-04:00'). In the lower half of the console, select the Inbound tab. Add a new rule to allow TCP on port 9700 from any (or a specific range of) IPs.
My TCP rule is set up to allow inbound traffic from ANY IP address on that port; you may want to configure it differently for security purposes.
Once the rule is set up, you should be able to access the web server at the public IP of your EC2 instance (which can be found on the Instances page of the AWS console), i.e. at http://<public IP>:9700.
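If you prefer the command line, the same inbound rule can be added with the AWS CLI (a sketch; launch-wizard-1 is just the wizard's default group name and may differ in your account):
# Allow TCP 9700 from anywhere; narrow the CIDR if you want to restrict access.
$ aws ec2 authorize-security-group-ingress --group-name launch-wizard-1 --protocol tcp --port 9700 --cidr 0.0.0.0/0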
Hope this helps!

Related

Why is Google Compute Engine not running my container?

I can do this successfully:
Bundle my app into a docker image
Build this image into a container using Google Cloud Build upon push to master
(This container is stored in the registry at, for example, gcr.io/my-project/my-container)
Deploy this container to the web using Google Cloud Run
Visit the Cloud Run URL and see my website
I am now trying more sophisticated builds and I think the next step is to use Google Compute Engine.
To start, I am simply trying to deploy a single instance of the same app that I deployed to Cloud Run:
Navigate to Compute Engine > VM Instances
Enter basics like instance name
Enter my container location under "Container Image": gcr.io/my-project/my-container
(As an aside, I find it suspect that the interface does not offer a selector for your existing Container Registry items here.)
Select "Allow HTTP Traffic" and "Allow HTTPS Traffic"
Click "Create"
GCE takes a minute to create it, and then it shows the green checkmark and the instance name, and "External IP: 35.238.xxx.xxx". I visit that URL in my browser and get... "35.238.xxx.xxx refused to connect."
To inspect, I go back to the GCE page and select "SSH > Open in browser window" next to my instance, which opens a type of cloud terminal to the machine.
In this terminal window, I type ps and see that no processes are running. The container's Dockerfile ends with CMD yarn start:prod, so I guess that's not happening here.
Further, I ls here and there and navigate around, and see that there is no /app directory from my Dockerfile's WORKDIR /app command. It seems that not only did my app not boot, but the container was never even copied to the VM instance?
What am I doing wrong?
For anyone having this issue: I faced the same problem and couldn't figure it out.
Reading Serhii's answer gave me the clue. I believe as of today (Jan 2021) the GCP Console UI is a bit unhelpful. It appears that if you type in a container name when creating your VM but WITHOUT specifying a tag on the end, it doesn't complain nor assume a default such as 'latest'; it just fails silently. Hence the VM boots, but with no Docker container running.
At least it now works for me; hopefully this helps others.
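In other words, always spell the tag out when creating the VM. A sketch with a hypothetical project/image name, using the same gcloud command that appears in the answer below:
# The explicit :latest (or a version tag) is the part the Console UI lets you silently omit.
$ gcloud compute instances create-with-container my-instance --container-image=gcr.io/my-project/my-container:latest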
Check whether your VM has an external IP address.
If it doesn't, the VM might not have network access to the public repository, or even to the Google Container Registry (gcr.io), and the Docker container silently fails to start.
I've decided to follow Deploying a container on a new VM instance again.
Please find my steps and commands below:
create a new VM that runs the Docker image gcr.io/cloud-marketplace/google/nginx1:latest with network tag http-server:
$ gcloud compute instances create-with-container instance-3 --tags=http-server,https-server --container-image=gcr.io/cloud-marketplace/google/nginx1:latest
Created [https://www.googleapis.com/compute/v1/projects/test-prj/zones/europe-west3-a/instances/instance-3].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
instance-3 europe-west3-a n1-standard-1 10.156.0.30 35.XXX.111.XXX RUNNING
create a new firewall rule:
$ gcloud compute firewall-rules create default-allow-http --direction=INGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server
Creating firewall...
Created [https://www.googleapis.com/compute/v1/projects/test-prj/global/firewalls/default-allow-http].
Creating firewall...done.
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http default INGRESS 1000 tcp:80 False
check current firewall rules:
$ nmap -Pn 35.XXX.111.XXX
Starting Nmap 7.70 ( https://nmap.org ) at 2020-04-02 12:04 CEST
PORT STATE SERVICE
...
80/tcp open http
check if NGINX is running in the container:
$ curl -I http://35.XXX.111.XXX
HTTP/1.1 200 OK
Server: nginx/1.16.1
...
$ curl http://35.XXX.111.XXX
...
<h1>Welcome to nginx!</h1>
...
also via web browser at http://35.XXX.111.XXX
check status of the container:
$ gcloud compute ssh instance-3
...
instance-3 ~ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
...
a657c8871239 gcr.io/cloud-marketplace/google/nginx1:latest "/usr/local/bin/dock…" 14 minutes ago Up 14 minutes klt-instance-3-uwtu
attach to the container and run curl http://35.XXX.111.XXX in a separate terminal:
instance-3 ~ $ docker attach a657c8871239
YY.YY.43.203 - - [02/Apr/2020:10:18:06 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
YY.YY.43.203 - - [02/Apr/2020:10:18:07 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.64.0" "-"
I found no errors while following the documentation.
To solve your issue:
Compare your steps and commands to mine.
Run the test Docker image from the documentation in your own project.
Try to replicate the steps from the documentation with your custom image.
If you still have the issue, update your question with all your steps, commands and outputs (the debugging sketch below may also help).
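If the steps above still leave you with a running VM but no working application, it can help to look at the container from inside the VM itself (a diagnostic sketch, reusing the instance name from the commands above):
$ gcloud compute ssh instance-3
# -a also lists containers that exited right after starting.
instance-3 ~ $ docker ps -a
# Replace CONTAINER_ID with the ID shown by docker ps -a to see why it stopped.
instance-3 ~ $ docker logs CONTAINER_ID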
I also had this problem: the instance was running, but it could not pull my container.
Error: Failed to start container: Error response from daemon:
{"message":"unautho rized: You don't have the needed permissions to
perform this operation, and you may have invalid credentials. To
authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication"
I had to add an extra scope to the yaml file: https://www.googleapis.com/auth/source.full_control
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/local-xxxxxxxxxxxxxx/apptraining', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/local-xxxxxxxxxxxxxx/apptraining']
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['compute', 'instances', 'create-with-container', 'instanceapptraining', '--machine-type=n1-standard-1', '--scopes=https://www.googleapis.com/auth/devstorage.full_control,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/userinfo.email,https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/datastore,https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/source.full_control,https://www.googleapis.com/auth/source.read_only,https://www.googleapis.com/auth/compute.readonly', '--zone=us-central1-a', '--preemptible', '--container-image=gcr.io/local-xxxxxxxxxxxxxx/apptraining:latest']

Cannot reach React application via DNS hosted on EC2

I just want to see my development build running on EC2, to show it to some friends, and think about deploying it after all of the work is done, but React doesn't cooperate. :/
I did everything I always do.
Started an Ubuntu server on EC2
Applied a security group with 3000/tcp open to my instance
Installed all dependencies of my app, npm 11.1 and its packages, via npm install.
Started it with npm start
and...
Nope.. there is no "and"... just my tears over a bunch of attempts without reaching 3000/tcp via public IP and DNS.
I even tested ping on it: I set ICMP echo request and response rules, tested, and it worked. But when I try to reach the application on port 3000/tcp, nothing.
Does someone have any idea?
As an image talks more than a thousand words, there it is... my nightmare.
PS: a curl on localhost:3000 inside the EC2 instance works just fine, while
another curl from outside the EC2 instance returns Connection Refused.
Looks like the application is bound to localhost (127.0.0.1). Update your start property to include --host 0.0.0.0
Refer: https://github.com/webpack/webpack-dev-server/issues/147
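For example (a sketch; adapt it to whichever dev server your start script actually runs - webpack-dev-server takes a --host flag, and create-react-app's react-scripts reads a HOST environment variable):
# webpack-dev-server: pass the host flag through npm
$ npm start -- --host 0.0.0.0
# create-react-app: set the host via the environment
$ HOST=0.0.0.0 npm start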

Deploy Django application on cloud, but cannot even get access to ip_address:8000

I am a beginner with Django, and I'm trying to deploy a test project on a cloud server to check whether it works or not.
Server: Ubuntu 16.04
After creating a virtualenv on the server (with nginx installed),
I execute the following:
python manage.py runserver 0.0.0.0:8000
Then I go to the browser to access my server at http://ip-address:8000.
But it fails to show anything of my application.
I have already added the IP address to ALLOWED_HOSTS, but it's still not working.
Are there any thoughts on this situation?
Maybe this method will work for you, because it worked for me on an EC2 instance on Amazon Web Services.
Step 1: Go to your dashboard, find the launch-wizard menu, and open it.
Step 2: Now click on Inbound, and after that click on Edit. Here click on Add Rule, select Custom TCP Rule, and enter 8000 in the port range. Now click on Add Rule again and select HTTP, do the same for HTTPS, then click on the Save button.
Step 3: Save and restart your AWS machine. Hopefully your Django app now runs on port 8000.
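Before blaming the security group, it is worth checking where the request actually dies (a diagnostic sketch; replace ip-address with your server's public IP):
# On the server itself: if this fails, the problem is Django/runserver, not the cloud firewall.
$ curl -I http://127.0.0.1:8000/
# From your own machine: if the local curl works but this one times out,
# an inbound rule (or a host firewall) is blocking port 8000.
$ curl -I http://ip-address:8000/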
For a Linux instance in the security group, follow these steps to verify the security group rule:
Connect to a Linux instance by using a password.
Run the following command to check whether TCP port 80 is being listened on: netstat -an | grep 80
If the following result is returned, the web service on TCP port 80 is enabled.
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN
If not, then there is a problem with your security group setup; please go through the official documentation to see how you can fix it:
https://www.alibabacloud.com/help/doc-detail/25471.htm

Setting up JMeter for Distributed testing in AWS with connectivity issues

I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local server, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step by step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeters config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000.
server.rmi.localport=50000
Started the JMeter server. Make sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then we set up JMeter to run tests remotely on these instances on our local client machine:
Ensured we used the same version of JMeter as was running on the servers. Installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The parameter remote_hosts needed to be set with the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
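Before starting a test, it is worth confirming that the client can actually reach the RMI ports on each server (a sketch using netcat; 1099 is JMeter's default server port and 50000 is the server.rmi.localport we chose above):
# Both checks must succeed; if either times out, fix the security group rules for that instance first.
$ nc -vz 54.x.x.x 1099
$ nc -vz 54.x.x.x 50000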
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the parameter RMI_HOST_DEF that the /usr/local/jmeter/bin/jmeter-server script includes in starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server's external IP, and we could start the test.
When the server node attempted to return the result and tried to connect to the client, the server tried to connect to the external IP address of my local machine. But it threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH, and set up a reverse tunnel to port 60000 on the client.
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little bit. To prevent that from happening, we added a parameter to each of the SSH tunnel commands directing the client to send a null packet to the server after 60 seconds of inactivity to keep the connection alive:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
HostName 54.x.x.x
Port 22
User ubuntu
ServerAliveInterval 60
RemoteForward 127.0.0.1:60000 127.0.0.1:60000
IdentityFile ~/.ssh/54-x-x-x.us-east.pem
IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.
)
I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP - on AWS/OS this defaults to the private IP
Ingress rules for the RMI port which defaults to 1099 - I use this
Ingress rules for the RMI "local" port which defaults to dynamic. Below I use 4001 for the client and 4000 for servers. The port can be the same but note the properties are different.
If you are using your workstation as the client you probably still need tunnels. Above Archana Aggarwal has good tips for tunnels.
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity I've set java.rmi.server.hostname but left server.rmi.localport as dynamic
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
When you go for distributed testing using JMeter in AWS, I would suggest you use Docker, which will help you build the JMeter test infrastructure very quickly. This way we can also ensure that the same versions of Java and JMeter are installed in all the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server. (They do not have to be exactly 1099 and 50000.)
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
for client
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the containers in the AWS instances will have their own IP addresses in the Docker network, so the master and slaves cannot communicate. That is why we explicitly set this property.
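For instance, a server container might be started like this (a sketch only; the image name is hypothetical, and the public IP lookup uses the EC2 instance metadata endpoint, which on newer instances may require an IMDSv2 token):
$ PUBLIC_IP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
# Publish both RMI ports and pass the properties the same way the jmeter-server examples above do.
$ docker run -d -p 1099:1099 -p 50000:50000 my-jmeter-server-image jmeter-server -Dserver.rmi.localport=50000 -Djava.rmi.server.hostname=$PUBLIC_IP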
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/

Connecting to EC2 Django development Server

I am new to EC2 and web development. Currently I have a Linux EC2 instance running, and have installed Django. I am creating a test project before I start on my real project and tried running a Django test server.
This is my output in the shell:
python manage.py runserver ec2-###-##-##-##.compute-1.amazonaws.com:8000
Validating models...
0 errors found
Django version 1.3, using settings 'testsite.settings'
Development server is running at http://ec2-###-##-##-##.compute-1.amazonaws.com:8000/
Quit the server with CONTROL-C.
To test that it is working I have tried visiting ec2-###-##-##-##.compute-1.amazonaws.com:8000, but I always get a "Cannot connect" message from my browser.
Whenever I do this locally on my computer, however, I do successfully get to the Django development home page at 127.0.0.1:8000. Could someone help me figure out what I am doing wrong / might be missing when I do this on my EC2 instance as opposed to my own laptop?
Using an EC2 instance with Ubuntu, I found that specifying 0.0.0.0:8000 worked:
$ python manage.py runserver 0.0.0.0:8000
Of course 8000 does need to be opened for TCP in your security group settings.
You probably don't have port 8000 open on the firewall. Check which security group your instance is running (probably "default") and check the rules it is running. You will probably find that port 8000 is not listed.
1) You need to make sure port 8000 is added as a Custom TCP Rule to your Security Group's list of inbound ports.
2) Odds are that the IP you see listed on your AWS Console, which is associated with your instance, is a PUBLIC IP or a PUBLIC domain name (i.e. ec2-###-##-##-##.compute-1.amazonaws.com or 174.101.122.132) that Amazon assigns.
2.1) If it is a public IP, then your instance has no way of knowing which public IP is assigned to it; rather, it will only know its assigned local IP.
2.2) To get your Local IP on a Linux System, type:
$ ifconfig
Then look at the eth0 data and you'll see an IP next to "inet addr" of the format xxx.xxx.xxx.xxx (e.g. 10.10.12.135). This is your local IP. (An EC2 metadata alternative is sketched after this answer.)
3) To successfully runserver you can do one of the following two:
$ python manage.py runserver <LOCAL IP>:8000
or
$ python manage.py runserver 0.0.0.0:8000
** Option Two also works great as Ernest Ezis mentioned in his answer.
EDIT : From The Django Book : "The IP address 0.0.0.0 tells the server to listen on any network interface"
** My theory of Public IP could be wrong, since I'm not sure how Amazon assigns IPs. I'd appreciate being corrected.
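On EC2 specifically, both addresses from step 2 can also be read from the instance metadata service instead of ifconfig (a sketch; instances that enforce IMDSv2 need a session token first):
# The private (local) address the instance itself knows about
$ curl -s http://169.254.169.254/latest/meta-data/local-ipv4
# The public address Amazon has mapped to the instance
$ curl -s http://169.254.169.254/latest/meta-data/public-ipv4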
I was having the same problem, but I was running RHEL on EC2. Besides adding a rule to the security group, I had to manually add the port to firewalld.
firewall-cmd --permanent --add-port=8000/tcp
firewall-cmd --reload
That worked for me! (Although I have no idea why I had to do that.)
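To confirm the rule took effect, firewalld can list the ports it has opened (a quick check; run as root or with sudo):
$ firewall-cmd --list-ports
# the output should now include 8000/tcp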
Yes, if you use the EC2 quick launch option, you should add a new HTTP rule (just as it appears in the list) to run a development server.
Adding a security group with the inbound rules as follows usually does the trick unless you have something else misconfigured. The port range specifies which port you want to allow incoming traffic on.
HTTP access would need 80
HTTP access over port 8000 would need 8000
SSH to server would need 22
HTTPS would need 443