I'm trying to connect to a server running on a guest OS in VirtualBox. VirtualBox is configured to assign a static IP to the guest, but I'm fairly sure the IP it assigns actually belongs to the host: if I have an nginx server running on the host, requests to the vboxnet1 IP are intercepted by the host server and never reach the guest.
Both the host and guest are Debian.
Also, I get the same result with and without the VirtualBox DHCP server enabled.
Here are the VirtualBox network settings (image omitted; can't embed images with <10 rep... sigh):
And the VM network settings (image omitted):
I've also tried with different IP addresses for the host, no change.
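For context, in a VirtualBox host-only network the address configured for vboxnetN in VirtualBox's preferences is the host's own address on that interface; the guest needs a different address in the same subnet, set either by the VirtualBox DHCP server or statically inside the guest. A minimal sketch of how to check this, assuming the VM's adapter is attached to vboxnet1:

# On the host: list host-only interfaces (192.168.66.1 is the host side)
$ VBoxManage list hostonlyifs

# On the host: check whether a DHCP server is attached to vboxnet1
$ VBoxManage list dhcpservers

# Inside the guest: the guest must have its own address, e.g. 192.168.66.101
$ ip addr show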
ifconfig on host:
$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:90:f5:e8:b0:e0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:183363 errors:0 dropped:0 overruns:0 frame:0
TX packets:183363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:70022881 (70.0 MB) TX bytes:70022881 (70.0 MB)
vboxnet1 Link encap:Ethernet HWaddr 0a:00:27:00:00:01
inet addr:192.168.66.1 Bcast:192.168.66.255 Mask:255.255.255.0
inet6 addr: fe80::800:27ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2545 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:308371 (308.3 KB)
wlan0 Link encap:Ethernet HWaddr 24:fd:52:c0:1b:b1
inet addr:192.168.2.106 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::26fd:52ff:fec0:1bb1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11724555 errors:0 dropped:0 overruns:0 frame:0
TX packets:7429276 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16204472393 (16.2 GB) TX bytes:1222715861 (1.2 GB)
iptables on host:
$ sudo iptables -L
[sudo] password for aidan:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here's an nmap scan comparing the host IP and the 'guest' IP:
$ nmap 192.168.66.1
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.66.1
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.16 seconds
$ nmap 192.168.2.106
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.2.106
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
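The identical port lists support the suspicion. One more check that would confirm both scans hit the same machine is to compare MAC addresses from inside the guest (a sketch, assuming the guest has connectivity on that subnet):

$ ping -c 1 192.168.66.1
$ ip neigh show 192.168.66.1
# If the MAC reported is 0a:00:27:00:00:01 (the host's vboxnet1 HWaddr
# above), then 192.168.66.1 is answered by the host, not the guest.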
I am having issues getting LocalStack to work on my Windows 10 Home system. I have been running Docker Toolbox without any issues (for other things).
I have tried invoking LocalStack in multiple ways (e.g. via docker-compose.yml, or by pulling it directly from Docker Hub), but I always get the same result: the container says that LocalStack is ready, but when I try to connect to the LocalStack services in my browser (e.g. http://localhost:4566) I get the following error:
This site can’t be reached
localhost refused to connect error.
...
ERR_CONNECTION_REFUSED
Reproduced below is one sequence of steps that I have taken to attempt to invoke Localstack (unsuccessfully I must add).
Command invoked: docker run -it --name localstack localstack/localstack:latest
Message trace....
Waiting for all LocalStack services to be ready
2020-05-04 20:02:27,144 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
2020-05-04 20:02:27,154 INFO supervisord started with pid 13
2020-05-04 20:02:28,163 INFO spawned: 'dashboard' with pid 19
2020-05-04 20:02:28,173 INFO spawned: 'infra' with pid 20
2020-05-04 20:02:28,242 INFO success: dashboard entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
(. .venv/bin/activate; bin/localstack web)
(. .venv/bin/activate; exec bin/localstack start --host)
2020-05-04 20:02:29,246 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
Waiting for all LocalStack services to be ready
LocalStack version: 0.11.0
LocalStack version: 0.11.0
Starting local dev environment. CTRL-C to quit.
!WARNING! - Looks like you have configured $LAMBDA_REMOTE_DOCKER=1 - please make sure to configure $HOST_TMP_FOLDER to point to your host's $TMPDIR
Waiting for all LocalStack services to be ready
2020-05-04T20:02:42:INFO:localstack.utils.common: Unable to store key/cert files for custom SSL certificate: [Errno 13] Permission denied: '/tmp/localstack/server.test.pem.key'
2020-05-04T20:02:42:INFO:localstack.services.install: Downloading and installing local KMS server. This may take some time.
Waiting for all LocalStack services to be ready
Starting edge router (https port 4566)...
Starting mock API Gateway service in http ports 4566 (recommended) and 4567 (deprecated)...
2020-05-04T20:02:48:INFO:localstack.multiserver: Starting multi API server process on port 51492
Starting mock CloudFormation service in http ports 4566 (recommended) and 4581 (deprecated)...
Starting mock CloudWatch service in http ports 4566 (recommended) and 4582 (deprecated)...
Starting mock DynamoDB service in http ports 4566 (recommended) and 4569 (deprecated)...
Starting mock DynamoDB Streams service in http ports 4566 (recommended) and 4570 (deprecated)...
Starting mock EC2 service in http ports 4566 (recommended) and 4597 (deprecated)...
Starting mock ES service in http ports 4566 (recommended) and 4578 (deprecated)...
Starting mock Firehose service in http ports 4566 (recommended) and 4573 (deprecated)...
Starting mock IAM service in http ports 4566 (recommended) and 4593 (deprecated)...
Starting mock STS service in http ports 4566 (recommended) and 4592 (deprecated)...
Starting mock Kinesis service in http ports 4566 (recommended) and 4568 (deprecated)...
Starting mock KMS service in http ports 4566 (recommended) and 4599 (deprecated)...
Starting mock Lambda service in http ports 4566 (recommended) and 4574 (deprecated)...
Starting mock CloudWatch Logs service in http ports 4566 (recommended) and 4586 (deprecated)...
Starting mock Redshift service in http ports 4566 (recommended) and 4577 (deprecated)...
Starting mock Route53 service in http ports 4566 (recommended) and 4580 (deprecated)...
Starting mock S3 service in http ports 4566 (recommended) and 4572 (deprecated)...
Starting mock Secrets Manager service in http ports 4566 (recommended) and 4584 (deprecated)...
Starting mock SES service in http ports 4566 (recommended) and 4579 (deprecated)...
Starting mock SNS service in http ports 4566 (recommended) and 4575 (deprecated)...
Starting mock SQS service in http ports 4566 (recommended) and 4576 (deprecated)...
Starting mock SSM service in http ports 4566 (recommended) and 4583 (deprecated)...
Starting mock Cloudwatch Events service in http ports 4566 (recommended) and 4587 (deprecated)...
Starting mock StepFunctions service in http ports 4566 (recommended) and 4585 (deprecated)...
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Waiting for all LocalStack services to be ready
Ready.
Since I kept getting ERR_CONNECTION_REFUSED in the browser, I detached from the container (Ctrl-P, Ctrl-Q) and ran the following command on the Windows host:
netstat -a
The output of the command is as follows:
Active Connections
Proto Local Address Foreign Address State
TCP 0.0.0.0:135 Mywinmc:0 LISTENING
TCP 0.0.0.0:445 Mywinmc:0 LISTENING
TCP 0.0.0.0:2425 Mywinmc:0 LISTENING
TCP 0.0.0.0:8092 Mywinmc:0 LISTENING
TCP 0.0.0.0:17500 Mywinmc:0 LISTENING
TCP 0.0.0.0:49664 Mywinmc:0 LISTENING
TCP 0.0.0.0:49665 Mywinmc:0 LISTENING
TCP 0.0.0.0:49666 Mywinmc:0 LISTENING
TCP 0.0.0.0:49667 Mywinmc:0 LISTENING
TCP 0.0.0.0:49668 Mywinmc:0 LISTENING
TCP 0.0.0.0:49673 Mywinmc:0 LISTENING
TCP 0.0.0.0:65530 Mywinmc:0 LISTENING
TCP 127.0.0.1:843 Mywinmc:0 LISTENING
TCP 127.0.0.1:5354 Mywinmc:0 LISTENING
TCP 127.0.0.1:5354 Mywinmc:49669 ESTABLISHED
TCP 127.0.0.1:5354 Mywinmc:49670 ESTABLISHED
TCP 127.0.0.1:17600 Mywinmc:0 LISTENING
TCP 127.0.0.1:27015 Mywinmc:0 LISTENING
TCP 127.0.0.1:27015 Mywinmc:50106 ESTABLISHED
TCP 127.0.0.1:44430 Mywinmc:0 LISTENING
TCP 127.0.0.1:49669 Mywinmc:5354 ESTABLISHED
TCP 127.0.0.1:49670 Mywinmc:5354 ESTABLISHED
TCP 127.0.0.1:50106 Mywinmc:27015 ESTABLISHED
TCP 127.0.0.1:50362 Mywinmc:0 LISTENING
TCP 127.0.0.1:52800 Mywinmc:52801 ESTABLISHED
TCP 127.0.0.1:52801 Mywinmc:52800 ESTABLISHED
TCP 127.0.0.1:52805 Mywinmc:52806 ESTABLISHED
TCP 127.0.0.1:52806 Mywinmc:52805 ESTABLISHED
TCP 192.168.1.13:139 Mywinmc:0 LISTENING
TCP 192.168.1.13:50247 20.185.212.106:https ESTABLISHED
TCP 192.168.1.13:51941 104.22.5.207:https ESTABLISHED
TCP 192.168.1.13:51949 server-13-249-79-178:https ESTABLISHED
TCP 192.168.1.13:51954 104.36.115.111:https ESTABLISHED
TCP 192.168.1.13:51963 server-13-249-75-45:https ESTABLISHED
TCP 192.168.1.13:52483 8.43.72.41:https ESTABLISHED
TCP 192.168.1.13:52486 104.17.119.107:https ESTABLISHED
TCP 192.168.1.13:52490 ip-185-184-8-30:https ESTABLISHED
TCP 192.168.1.13:53474 52.226.111.32:https ESTABLISHED
TCP 192.168.1.13:53665 ec2-34-194-118-104:https CLOSE_WAIT
TCP 192.168.1.13:54028 104.26.8.27:https ESTABLISHED
TCP 192.168.1.13:54104 bam-8:https ESTABLISHED
TCP 192.168.1.13:54228 30:https ESTABLISHED
TCP 192.168.1.13:54261 139:https ESTABLISHED
TCP 192.168.1.13:54265 151.101.49.253:https ESTABLISHED
TCP 192.168.1.13:54266 a-0001:https ESTABLISHED
TCP 192.168.1.13:54269 49:https ESTABLISHED
TCP 192.168.1.13:54277 49:https ESTABLISHED
TCP 192.168.1.13:54281 49:https ESTABLISHED
TCP 192.168.1.13:54289 194:https ESTABLISHED
TCP 192.168.1.13:54355 162.125.35.135:https CLOSE_WAIT
TCP 192.168.1.13:54378 162.125.8.13:https CLOSE_WAIT
TCP 192.168.1.13:54406 20.185.212.106:https ESTABLISHED
TCP 192.168.1.13:54419 162.125.8.7:https CLOSE_WAIT
TCP 192.168.1.13:54421 162.125.19.131:https ESTABLISHED
TCP 192.168.1.13:54422 152.199.6.14:https TIME_WAIT
TCP 192.168.1.13:54424 152.199.5.3:https TIME_WAIT
TCP 192.168.1.13:54425 ec2-3-94-69-170:https TIME_WAIT
TCP 192.168.1.13:54429 server-143-204-160-19:https ESTABLISHED
TCP 192.168.1.13:54430 a23-193-18-78:https ESTABLISHED
TCP 192.168.1.13:54440 a23-193-18-78:https ESTABLISHED
TCP 192.168.1.13:54444 ec2-54-162-73-57:https ESTABLISHED
TCP 192.168.1.13:54447 a23-67-241-31:https ESTABLISHED
TCP 192.168.1.13:54470 server-13-249-79-42:https ESTABLISHED
TCP 192.168.1.13:54474 104.16.68.69:https ESTABLISHED
TCP 192.168.1.13:54478 a23-199-248-26:https ESTABLISHED
TCP 192.168.1.13:54498 185.167.164.39:https TIME_WAIT
TCP 192.168.1.13:54504 93.184.215.201:https ESTABLISHED
TCP 192.168.1.13:54509 lb-140-82-114-3-iad:https TIME_WAIT
TCP 192.168.1.13:54510 151.101.48.133:https TIME_WAIT
TCP 192.168.1.13:54513 s3:https TIME_WAIT
TCP 192.168.1.13:54515 104.16.133.229:https TIME_WAIT
TCP 192.168.1.13:54516 server-13-249-79-31:https TIME_WAIT
TCP 192.168.1.13:54520 ec2-3-224-32-104:https TIME_WAIT
TCP 192.168.1.13:54526 192.184.68.146:https CLOSE_WAIT
TCP 192.168.1.13:54527 185.167.164.39:https TIME_WAIT
TCP 192.168.1.13:54528 ec2-3-217-197-240:https CLOSE_WAIT
TCP 192.168.1.13:54529 ec2-3-217-197-240:https CLOSE_WAIT
TCP 192.168.1.13:54530 ec2-54-69-254-184:https CLOSE_WAIT
TCP 192.168.1.13:54533 54.239.17.112:https ESTABLISHED
TCP 192.168.1.13:54537 232:https ESTABLISHED
TCP 192.168.1.13:54538 r-17-48-62-5:https TIME_WAIT
TCP 192.168.1.13:60684 40.83.21.197:https ESTABLISHED
TCP 192.168.1.13:60695 52.242.211.89:https ESTABLISHED
TCP 192.168.1.13:60696 52.242.211.89:https ESTABLISHED
TCP 192.168.1.13:60780 ec2-3-224-94-60:https ESTABLISHED
TCP 192.168.1.13:61712 whatsapp-cdn-shv-01-dfw5:https ESTABLISHED
TCP 192.168.1.13:63209 9:https ESTABLISHED
TCP 192.168.1.13:63395 Chromecast:8009 ESTABLISHED
TCP 192.168.1.13:63705 on-in-f188:5228 ESTABLISHED
TCP 192.168.1.13:63706 e1:https ESTABLISHED
TCP 192.168.1.13:63720 108-174-10-10:https ESTABLISHED
TCP 192.168.56.1:139 Mywinmc:0 LISTENING
TCP 192.168.99.1:139 Mywinmc:0 LISTENING
TCP [::]:135 Mywinmc:0 LISTENING
TCP [::]:445 Mywinmc:0 LISTENING
TCP [::]:8092 Mywinmc:0 LISTENING
TCP [::]:17500 Mywinmc:0 LISTENING
TCP [::]:49664 Mywinmc:0 LISTENING
TCP [::]:49665 Mywinmc:0 LISTENING
TCP [::]:49666 Mywinmc:0 LISTENING
TCP [::]:49667 Mywinmc:0 LISTENING
TCP [::]:49668 Mywinmc:0 LISTENING
TCP [::]:49673 Mywinmc:0 LISTENING
TCP [::1]:49770 Mywinmc:0 LISTENING
UDP 0.0.0.0:500 *:*
UDP 0.0.0.0:2425 *:*
UDP 0.0.0.0:3702 *:*
UDP 0.0.0.0:3702 *:*
UDP 0.0.0.0:4500 *:*
UDP 0.0.0.0:5050 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5353 *:*
UDP 0.0.0.0:5355 *:*
UDP 0.0.0.0:17500 *:*
UDP 0.0.0.0:49640 *:*
UDP 0.0.0.0:49774 *:*
UDP 0.0.0.0:54925 *:*
UDP 0.0.0.0:55066 *:*
UDP 0.0.0.0:55739 *:*
UDP 0.0.0.0:57602 *:*
UDP 0.0.0.0:57603 *:*
UDP 0.0.0.0:57975 *:*
UDP 0.0.0.0:58140 *:*
UDP 0.0.0.0:58995 *:*
UDP 0.0.0.0:59072 *:*
UDP 0.0.0.0:59303 *:*
UDP 0.0.0.0:59698 *:*
UDP 0.0.0.0:60343 *:*
UDP 0.0.0.0:60813 *:*
UDP 127.0.0.1:1900 *:*
UDP 127.0.0.1:49677 *:*
UDP 127.0.0.1:49678 *:*
UDP 127.0.0.1:50019 *:*
UDP 127.0.0.1:58994 *:*
UDP 127.0.0.1:59070 *:*
UDP 127.0.0.1:62643 *:*
UDP 127.0.0.1:64870 *:*
UDP 127.0.0.1:64871 *:*
UDP 192.168.1.13:137 *:*
UDP 192.168.1.13:138 *:*
UDP 192.168.1.13:1900 *:*
UDP 192.168.1.13:2177 *:*
UDP 192.168.1.13:5353 *:*
UDP 192.168.1.13:50018 *:*
UDP 192.168.56.1:137 *:*
UDP 192.168.56.1:138 *:*
UDP 192.168.56.1:1900 *:*
UDP 192.168.56.1:2177 *:*
UDP 192.168.56.1:5353 *:*
UDP 192.168.56.1:50016 *:*
UDP 192.168.99.1:137 *:*
UDP 192.168.99.1:138 *:*
UDP 192.168.99.1:1900 *:*
UDP 192.168.99.1:2177 *:*
UDP 192.168.99.1:5353 *:*
UDP 192.168.99.1:50017 *:*
UDP [::]:500 *:*
UDP [::]:3702 *:*
UDP [::]:3702 *:*
UDP [::]:4500 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5353 *:*
UDP [::]:5355 *:*
UDP [::]:49775 *:*
UDP [::]:59303 *:*
UDP [::]:59699 *:*
UDP [::1]:1900 *:*
UDP [::1]:5353 *:*
UDP [::1]:50015 *:*
UDP [fe80::6c83:b041:8dfb:82dd%6]:1900 *:*
UDP [fe80::6c83:b041:8dfb:82dd%6]:2177 *:*
UDP [fe80::6c83:b041:8dfb:82dd%6]:50014 *:*
UDP [fe80::9cd1:1694:a63e:e0c3%2]:1900 *:*
UDP [fe80::9cd1:1694:a63e:e0c3%2]:2177 *:*
UDP [fe80::9cd1:1694:a63e:e0c3%2]:50013 *:*
UDP [fe80::e8c8:ff57:e70f:27e1%19]:546 *:*
UDP [fe80::e8c8:ff57:e70f:27e1%19]:1900 *:*
UDP [fe80::e8c8:ff57:e70f:27e1%19]:2177 *:*
UDP [fe80::e8c8:ff57:e70f:27e1%19]:50012 *:*
It is clear that none of the mock service ports are reachable on the host, despite what the message trace of docker run -it --name localstack localstack/localstack:latest suggests (as shown above).
Even the following command did not work:
docker run -it --name localstack2 -e HOST_TMP_FOLDER="/tmp" localstack/localstack:latest
Are your ports mapped to the host? Try adding the -p option to your docker run command; note that it must come before the image name, or Docker passes it to the container as an argument:
docker run -it --name localstack -p 4566:4566 -p 4567-4584:4567-4584 localstack/localstack:latest
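One more thing worth checking: Docker Toolbox runs containers inside a VirtualBox VM, so published ports are not reachable via localhost on the Windows host; you have to use the Toolbox VM's address instead. A hedged sketch (the machine name default is the Toolbox default, and /health is, as far as I recall, LocalStack's status endpoint on the edge port):

# Find the Docker Toolbox VM's IP (commonly 192.168.99.100)
$ docker-machine ip default

# Test the edge port against that address instead of localhost
$ curl http://192.168.99.100:4566/health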
So I couldn't figure out why I couldn't connect to my containers from a public IP until I found out which IP the Docker ports were listening on. As ifconfig shows, Docker is using 172.x addresses, which are not valid within my VPC; as you can see below, I'm not using any 172.x range in my VPC, so I'm not sure where this is coming from. Should I create a new subnet in a new VPC, make an AMI, and launch it in the new VPC with a conforming subnet? Can I change the IP/port Docker is listening on?
ifconfig
br-387bdd8b6fc4 Link encap:Ethernet HWaddr 02:42:69:A3:BA:A9
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:fea3:baa9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:114269 errors:0 dropped:0 overruns:0 frame:0
TX packets:83675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11431231 (10.9 MiB) TX bytes:36504449 (34.8 MiB)
docker0 Link encap:Ethernet HWaddr 02:42:65:A6:7C:B3
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
eth0 Link encap:Ethernet HWaddr 02:77:F6:7A:50:A6
inet addr:10.0.140.193 Bcast:10.0.143.255 Mask:255.255.240.0
inet6 addr: fe80::77:f6ff:fe7a:50a6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:153720 errors:0 dropped:0 overruns:0 frame:0
TX packets:65773 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:209782581 (200.0 MiB) TX bytes:5618173 (5.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2066 (2.0 KiB) TX bytes:2066 (2.0 KiB)
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.11 tcp dpt:389
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:9043
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:7777
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:9443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.7 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.8 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.9 tcp dpt:443
My VPC subnets:
DockerSubnet1-Public 10.0.1.0/24
DockerSubnet2-Public 10.0.2.0/24
DockerSubnet3-Private 10.0.3.0/24
DockerSubnet4-Private 10.0.4.0/24
Private subnet 1A 10.0.0.0/19
Private subnet 2A 10.0.32.0/19
Public subnet 1 10.0.128.0/20
Public subnet 2 10.0.144.0/20
The standard way to use Docker networking is with the docker run -p command-line option. If you run:
docker run -p 8888:80 myimage
Docker will automatically set up a port forward from port 8888 on the host to port 80 in the container.
If your host has multiple interfaces (you hint at a "public IP", though it's not shown separately in your ifconfig output), you can make it listen on only one of those by adding an IP address:
docker run -p 10.0.140.193:8888:80 myimage
The Docker-internal 172.18.0.0/16 addresses are essentially useless. They're an important implementation detail when talking between containers, but Docker provides an internal DNS service that will resolve container names to internal IP addresses. In figuring out how to talk to a container from "outside", you don't need these IP addresses.
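As a quick sketch of that DNS behaviour (the names here are examples; container-name resolution works on user-defined networks):

$ docker network create appnet
$ docker run -d --name web --network appnet nginx
$ docker run --rm --network appnet busybox ping -c 1 web
# "web" resolves to the container's 172.x address, which is only
# meaningful to other containers on the same Docker network.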
The terminology in your question hints strongly at Amazon Web Services. A common problem here is that your EC2 instance is running under a security group (network-level firewall) that isn't allowing the inbound connection.
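To verify what is actually published on the host, without reading raw iptables output, something like this can help (the container name is an example):

$ docker port mycontainer
# e.g. "80/tcp -> 0.0.0.0:8888" means the host listens on all interfaces
# If the userland proxy is in use, the listener shows up here too:
$ sudo ss -tlnp | grep docker-proxy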
I never did figure out why. I did allow all traffic inbound and outbound in the security groups and network ACLs, etc. I made an AMI out of my instance, copied it over to another region with a newly built VPC, and deployed there. It works!! Chalking it up to AWS VPC.
Thanks for the clarification on 172.x; I did not know that was only used between the Docker containers. Makes sense now.
Currently, with AWS ECS combined with an internal NLB, it is impossible to have inter-system communication that hairpins back to the same instance, meaning container 1 (on instance 1) -> internal NLB -> container 2 (on instance 1). Because the source IP address does not change and stays the same as the destination address, the ECS instance drops this traffic.
I found a thread on the AWS forums here https://forums.aws.amazon.com/message.jspa?messageID=806936#806936 explaining my problem.
I've contacted AWS Support, and they stated that a fix is on their roadmap, but they cannot tell me when it will land, so I am looking into ways to solve it on my own until AWS fixes it permanently.
It must be fixable by altering the ECS iptables rules, but I don't have enough knowledge to completely read their iptables setup and understand what needs to be changed.
iptables-save output:
:DOCKER - [0:0]
:DOCKER-ISOLATION - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5000 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8086 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
# Generated by iptables-save v1.4.18 on Wed Jan 31 22:19:47 2018
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [38:2974]
:POSTROUTING ACCEPT [7147:429514]
:DOCKER - [0:0]
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A PREROUTING -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 127.0.0.1:51679
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -d 169.254.170.2/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.3/32 -d 172.17.0.3/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 5000 -j MASQUERADE
-A POSTROUTING -s 172.17.0.5/32 -d 172.17.0.5/32 -p tcp -m tcp --dport 8086 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32769 -j DNAT --to-destination 172.17.0.3:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32777 -j DNAT --to-destination 172.17.0.2:5000
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 32792 -j DNAT --to-destination 172.17.0.5:8086
COMMIT
# Completed on Wed Jan 31 22:19:47 2018
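A more readable view of the same NAT rules, for anyone trying to map published ports to containers, is:

$ sudo iptables -t nat -L DOCKER -n -v --line-numbers
# Each DNAT rule pairs a host port with a container address, e.g.
# dpt:32769 to:172.17.0.3:5000 in the dump above.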
ip a:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc mq state UP group default qlen 1000
link/ether 0a:b4:86:0b:c0:c4 brd ff:ff:ff:ff:ff:ff
inet 10.12.80.181/26 brd 10.12.80.191 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::8b4:86ff:fe0b:c0c4/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ca:cf:36:ae brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:caff:fecf:36ae/64 scope link
valid_lft forever preferred_lft forever
7: vethbd1da82@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 36:6d:d6:bd:d5:d8 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::346d:d6ff:febd:d5d8/64 scope link
valid_lft forever preferred_lft forever
27: vethc65a98f@if26: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether e6:cf:79:d4:aa:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::e4cf:79ff:fed4:aa7a/64 scope link
valid_lft forever preferred_lft forever
57: veth714e7ab@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 1e:c2:a5:02:f6:ee brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::1cc2:a5ff:fe02:f6ee/64 scope link
valid_lft forever preferred_lft forever
I have no information about upcoming solutions, but I suspect any workaround will involve preventing an instance from connecting to itself and instead always connecting to a different instance... or perhaps using the balancer's source address for hairpinned connections instead of the originating address.
The fundamental problem is this: the balancer works by integrating with the network infrastructure, and doing network address translation, altering the original target address on the way out, and the source address on the way back in, so that the instance in the target group sees the real source address of the client side, but not the other way around... but this is not compatible with asymmetric routing. When the instance ends up talking to itself, the route is quite asymmetric.
Assume the balancer is 172.30.1.100 and the instance is 172.30.2.200.
A TCP connection is initiated from 172.30.2.200 (instance) to 172.30.1.100 (balancer). The ports are not really important, but let's assume the source port is 49152 (ephemeral) and the balancer target port is 80 and the instance target port is 8080.
172.30.2.200:49152 > 172.30.1.100:80 SYN
The NLB is a NAT device, so this is translated:
172.30.2.200:49152 > 172.30.2.200:8080 SYN
This is sent back to the instance.
This already doesn't make sense, because the instance just got an incoming request from itself, from something external, even though it didn't make that request.
Assuming it responds, rather than dropping what is already a nonsense packet, now you have this:
172.30.2.200:8080 > 172.30.2.200:49152 SYN+ACK
If 172.30.2.200:49152 had actually sent a packet to 172.30.2.200:8080 it would respond with an ACK and the connection would be established.
But it didn't.
The next thing that happens should be something like this:
172.30.2.200:49152 > 172.30.2.200:8080 RST
Meanwhile, 172.30.2.200:49152 has heard nothing back from 172.30.1.100:80, so it will retry and then eventually give up: Connection timed out.
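For what it's worth, this failure mode is visible directly on the instance with tcpdump; a sketch, assuming the instance target port 8080 from the example:

$ sudo tcpdump -n -i any 'tcp port 8080'
# A hairpinned connection shows a SYN apparently arriving from the
# instance's own address (172.30.2.200 above), then retransmissions
# and/or an RST, but never a completed handshake.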
When the source and destination machine are different, NLB works because it's not a real (virtual) machine like those provided by ELB/ALB—it is something done by the network itself. That is the only possible explanation because those packets with translated addresses otherwise do make it back to the original machine with the NAT occurring in the reverse direction, and that could only happen if the VPC network were keeping state tables of these connections and translating them.
Note that in VPC, the default gateway isn't real. In fact, the subnets aren't real. The Ethernet network isn't real. (And none of this is a criticism. There's some utterly brilliant engineering in evidence here.) All of it is emulated by the software in the VPC network infrastructure. When two machines on the same subnet talk to each other directly... well, they don't.¹ They are talking over a software-defined network. As such, the network can see these packets and do the translation required by NLB, even when machines are on the same subnet.
But not when a machine is talking to itself, because when that happens, the traffic never appears on the wire—it remains inside the single VM, out of the reach of the VPC network infrastructure.
I don't believe an instance-based workaround is possible.
¹ They don't. A very interesting illustration of this is to monitor traffic on two instances on the same subnet with Wireshark. Open the security groups, then ping one instance from the other. The source machine sends an ARP request and appears to get an ARP response from the target... but there's no evidence of this ARP interaction on the target. That's because it doesn't happen. The network handles the ARP response for the target instance. This is part of the reason why it isn't possible to spoof one instance from another—packets that are forged are not forwarded by the network, because they are clearly not valid, and the network knows it. After that ARP occurs, the ping is normal. The traffic appears to go directly from instance to instance, based on the layer 2 headers, but that is not what actually occurs.
What happened:
I can't connect to my service from a web browser.
What you expected to happen:
To be able to connect to my service.
How to reproduce it (as minimally and precisely as possible):
First, I made 'my-deploy.yaml' like this.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy-name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
      - name: mycontainer
        image: alicek106/composetest:balanced_web
        ports:
        - containerPort: 80
And then, I made 'my-service.yaml' like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service-name
spec:
  ports:
  - name: my-deploy-svc
    port: 8080
    targetPort: 80
  type: LoadBalancer
  externalIPs:
  - 104.196.161.33
  selector:
    app: my-deploy
So, I created the deployment and service, and checked them:
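A sketch of the commands involved (the exact commands weren't shown in the post; kubectl create -f works equally well on 1.8):

$ kubectl apply -f my-deploy.yaml
$ kubectl apply -f my-service.yaml
$ kubectl get svc
$ kubectl get deploy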
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
my-service-name LoadBalancer 10.106.31.254 104.196.161.33 8080:32508/TCP 5d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-deploy-name 3 3 3 3 6d
and tried to connect to 104.196.161.33:8080 and 104.196.161.33:32508 in the Chrome browser, but it doesn't work.
What should I do?
Environment:
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: VM on Ubuntu 16.04 LTS
OS (e.g. from /etc/os-release): Ubuntu 16.04 LTS
Kernel: Linux master 4.10.0-37-generic #41~16.04.1-Ubuntu SMP Fri Oct 6 22:42:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Docker-CE v17.06
Others:
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 6d v1.8.1
node1 Ready <none> 6d v1.8.1
node2 Ready <none> 6d v1.8.1
ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:ba:93:a2:f2
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:80:ab:14
inet addr:39.119.118.176 Bcast:39.119.118.255 Mask:255.255.255.128
inet6 addr: fe80::250:56ff:fe80:ab14/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9581474 errors:0 dropped:473 overruns:0 frame:0
TX packets:4928331 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1528509917 (1.5 GB) TX bytes:4020347835 (4.0 GB)
flannel.1 Link encap:Ethernet HWaddr c6:b5:ef:90:ea:8f
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:184 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:44750027 errors:0 dropped:0 overruns:0 frame:0
TX packets:44750027 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13786966091 (13.7 GB) TX bytes:13786966091 (13.7 GB)
※ P.S.: Could you recommend a web service example on Docker & Kubernetes to me?
In my case, the external IP was allocated by GCE automatically; it doesn't need to be set manually in the YAML configuration. So if you see an address under EXTERNAL-IP in the output of "kubectl get svc ${service-name}", it may mean things already work as you intended.
(But I'm not sure whether specifying the external IP in the configuration works.)
And as far as I know, a Service of type LoadBalancer only works with a cloud integration that supports such functionality.
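Since this cluster runs on plain VMs with no cloud controller, one hedged workaround is to skip the LoadBalancer semantics and use the auto-assigned NodePort directly (32508 in the output above) against a node's real address (39.119.118.176 per the ifconfig):

$ curl http://39.119.118.176:32508/
# Any node's address works: kube-proxy forwards NodePort traffic to the pods.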
P.S. I guess you are trying to follow the contents of "Let's starting Docker" in the Republic of Korea; if so, kindly contact me by e-mail or Twitter :D
I can reply to you directly, because I'm the author of that book.
For C++ UDP socket port multiplexing, I found that with a DNAT-style rule in the PREROUTING chain I can redirect the packets for a particular UDP port and listen to the packets received on it:
iptables -t nat -A PREROUTING -i <iface> -p <proto> --dport <dport> -j REDIRECT --to-port <newport>
Unfortunately this works ONLY for packets received at this port. How can I get the packets being sent from this port?
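For reference, a minimal sketch of the receiving side of such a rule (the interface and port numbers are examples): redirect inbound UDP datagrams addressed to port 5000 to local port 5001 and listen there. Note that PREROUTING only sees packets arriving from the network, which is consistent with the behaviour described; locally generated packets traverse the OUTPUT chain instead.

# Redirect inbound UDP packets for port 5000 to local port 5001
$ sudo iptables -t nat -A PREROUTING -i eth0 -p udp --dport 5000 -j REDIRECT --to-port 5001

# Listen on the new port (flag syntax varies between netcat variants)
$ nc -u -l -p 5001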