docker using ip not within vpc - amazon-web-services

So I couldn't figure out why I couldn't connect to my containers from a public IP until I found out which IPs the Docker ports were listening on. As ifconfig shows below, they're 172.x addresses, which aren't valid within my VPC; you can see I'm not using any 172.x range in my VPC, so I'm not sure where this is coming from. Should I create a new subnet in a new VPC, make an AMI, and launch the instance in that VPC with a conforming subnet? Or can I change the IP/port Docker is listening on?
ifconfig
br-387bdd8b6fc4 Link encap:Ethernet HWaddr 02:42:69:A3:BA:A9
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:fea3:baa9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:114269 errors:0 dropped:0 overruns:0 frame:0
TX packets:83675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11431231 (10.9 MiB) TX bytes:36504449 (34.8 MiB)
docker0 Link encap:Ethernet HWaddr 02:42:65:A6:7C:B3
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
eth0 Link encap:Ethernet HWaddr 02:77:F6:7A:50:A6
inet addr:10.0.140.193 Bcast:10.0.143.255 Mask:255.255.240.0
inet6 addr: fe80::77:f6ff:fe7a:50a6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:153720 errors:0 dropped:0 overruns:0 frame:0
TX packets:65773 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:209782581 (200.0 MiB) TX bytes:5618173 (5.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2066 (2.0 KiB) TX bytes:2066 (2.0 KiB)
iptables output (excerpt):
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.11 tcp dpt:389
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:9043
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:7777
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:9443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.7 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.8 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.9 tcp dpt:443
VPC subnets:
DockerSubnet1-Public 10.0.1.0/24
DockerSubnet2-Public 10.0.2.0/24
DockerSubnet3-Private 10.0.3.0/24
DockerSubnet4-Private 10.0.4.0/24
Private subnet 1A 10.0.0.0/19
Private subnet 2A 10.0.32.0/19
Public subnet 1 10.0.128.0/20
Public subnet 2 10.0.144.0/20
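As an aside, not part of the original thread: if the bridge subnets Docker picks ever genuinely conflicted with the VPC CIDR, the ranges Docker allocates from can be changed in /etc/docker/daemon.json. A sketch with placeholder ranges:
{
  "bip": "10.200.0.1/24",
  "default-address-pools": [
    { "base": "10.201.0.0/16", "size": 24 }
  ]
}
This takes effect after restarting the Docker daemon, and default-address-pools requires a reasonably recent Docker release. The answer below explains why this usually isn't necessary here.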

The standard way to use Docker networking is with the docker run -p command-line option. If you run:
docker run -p 8888:80 myimage
Docker will automatically set up a port forward from port 8888 on the host to port 80 in the container.
If your host has multiple interfaces (you hint at a "public IP", though it's not shown separately in your ifconfig output), you can make the forward listen on only one of them by adding an IP address to the option:
docker run -p 10.0.140.193:8888:80 myimage
The Docker-internal 172.18.0.0/16 addresses are essentially useless. They're an implementation detail of container-to-container traffic, and even there Docker provides an internal DNS service that resolves container names to these addresses for you. When figuring out how to talk to a container from "outside", you don't need these IP addresses at all.
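For example, a minimal sketch (the network and container names here are placeholders, not from the question): containers on the same user-defined network reach each other by name, without ever needing the 172.x addresses:
docker network create appnet
docker run -d --name web --network appnet nginx
docker run --rm --network appnet busybox wget -qO- http://web/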
The terminology in your question hints strongly at Amazon Web Services. A common problem here is that your EC2 instance is running under a security group (network-level firewall) that isn't allowing the inbound connection.
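If that's the case, the fix is in the security group rather than in Docker. For instance, a minimal sketch using the AWS CLI (the group ID here is a placeholder, and port 8888 matches the published port from the example above):
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8888 --cidr 0.0.0.0/0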

I never did figure out why. I allowed all traffic inbound and outbound in the security groups, network ACLs, etc. I made an AMI from my instance, copied it over to another region with a newly built VPC, and deployed there. It works! Chalking it up to the AWS VPC.
Thanks for the clarification on the 172.x addresses; I did not know those were for traffic between the Docker containers. Makes sense now.

Related

Squid SslBump Peek and Slice leads to OpenSSL SSL_connect: SSL_ERROR_SYSCALL for HTTPS connections

I'm trying to set up DNS filtering with a transparent proxy (Squid v3.5) on AWS EC2. It works fine for HTTP traffic, but not for HTTPS traffic. For HTTPS traffic, I observe the following:
For websites not in the allow-list, I get an immediate curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to google.com:443
For websites in the allow-list, the connection hangs at the TLSv1.2 (OUT), TLS handshake, Client hello (1): state for a long time, then finally fails with curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to example.com:443
Squid Settings
Squid version is 3.5.20 and compiled --with-openssl (I installed from yum and didn't compile myself). The full output of squid -v is shown below:
Squid Cache: Version 3.5.20
Service Name: squid
configure options: '--build=x86_64-koji-linux-gnu' '--host=x86_64-koji-linux-gnu' '--program-prefix=' '--prefix=/usr' '--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin' '--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include' '--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--sharedstatedir=/var/lib' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--disable-strict-error-checking' '--exec_prefix=/usr' '--libexecdir=/usr/lib64/squid' '--localstatedir=/var' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--with-logdir=$(localstatedir)/log/squid' '--with-pidfile=$(localstatedir)/run/squid.pid' '--disable-dependency-tracking' '--enable-eui' '--enable-follow-x-forwarded-for' '--enable-auth' '--enable-auth-basic=DB,LDAP,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB,SMB_LM,getpwnam' '--enable-auth-ntlm=smb_lm,fake' '--enable-auth-digest=file,LDAP,eDirectory' '--enable-auth-negotiate=kerberos' '--enable-external-acl-helpers=file_userip,LDAP_group,time_quota,session,unix_group,wbinfo_group,kerberos_ldap_group' '--enable-cache-digests' '--enable-cachemgr-hostname=localhost' '--enable-delay-pools' '--enable-epoll' '--enable-ident-lookups' '--enable-linux-netfilter' '--enable-removal-policies=heap,lru' '--enable-snmp' '--enable-ssl-crtd' '--enable-storeio=aufs,diskd,rock,ufs' '--enable-wccpv2' '--enable-esi' '--enable-ecap' '--with-aio' '--with-default-user=squid' '--with-dl' '--with-openssl' '--with-pthreads' '--disable-arch-native' 'build_alias=x86_64-koji-linux-gnu' 'host_alias=x86_64-koji-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 'LDFLAGS=-Wl,-z,relro -pie -Wl,-z,relro -Wl,-z,now' 'CXXFLAGS=-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic -fpie' 'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
squid.conf is shown below:
visible_hostname squid
cache deny all
# Log format and rotation
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
logfile_rotate 10
debug_options rotate=10
# Handle HTTP requests
http_port 3128
http_port 3129 intercept
# Handle HTTPS requests
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
acl SSL_port port 443
http_access allow SSL_port
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
# Deny requests to proxy instance metadata
acl instance_metadata dst 169.254.169.254
http_access deny instance_metadata
# Filter HTTP requests based on the allowlist
acl allowed_http_sites dstdomain "/etc/squid/allowlist.txt"
http_access allow allowed_http_sites
# Filter HTTPS requests based on the allowlist
acl allowed_https_sites ssl::server_name "/etc/squid/allowlist.txt"
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all
http_access deny all
iptables:
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130
AWS EC2/VPC Settings
The squid proxy EC2 instance is in a public subnet
The squid proxy EC2 instance has source/destination check disabled
The client EC2 instance is in another public subnet
The security groups of these EC2 instances allow all inbound traffic internally (based on private IP) and allow all outbound traffic.
Route tables:
Routing for squid's subnet: https://imgur.com/a/78RaBiJ
Routing for the client subnet: https://imgur.com/a/DRwrQVT (the ENI ID is for the squid proxy's EC2)
The iptables rules as shown will redirect any HTTP/HTTPS traffic on ports 80 and 443, with no exception for the connections Squid itself initiates towards the target servers.
To exclude those, add an owner match: either "! --uid-owner {squid uid}" (match everything except Squid-initiated connections) or "--uid-owner {initiator uid}" (match only connections initiated by the process owner {initiator uid}). Note that the owner match only applies to locally generated packets, so these rules go in the nat OUTPUT chain; the PREROUTING rules keep handling the traffic forwarded from the client instance.
So they should be as below:
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner {squid uid} --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A OUTPUT -p tcp -m owner ! --uid-owner {squid uid} --dport 443 -j REDIRECT --to-port 3130
or
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner {initiator uid} --dport 80 -j REDIRECT --to-port 3129
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner {initiator uid} --dport 443 -j REDIRECT --to-port 3130
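A quick, generic way to see which of these rules are actually matching (not from the original answer) is to watch the packet counters on the nat table while a client makes a request:
sudo iptables -t nat -L -n -v --line-numbers
The pkts column should increase on the PREROUTING redirect rules for traffic coming from the client instance, and stay flat on the owner-excluded rules for Squid's own outbound connections.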

AWS: Failed to connect to port XXXX: Connection refused

I am connected to an AWS server, where I want to host an Elasticsearch application. For that to work, I need to open a set of ports. In my AWS security group, I have opened the ones I consider necessary. To check whether that worked, I tried the following:
While connected to the instance via ssh, I typed curl localhost:3002, which outputs:
<html><body>You are being redirected.</body></html>
When I try the same from my local machine, i.e. curl http://ec2-xxxxx.eu-central-1.compute.amazonaws.com:3002, I receive:
curl: (7) Failed to connect to ec2-xxxxx.eu-central-1.compute.amazonaws.com port 3002: Connection refused
Does that mean that port 3002 is not open, or could there be another explanation?
Thank you for your help!
Edit:
The configuration in the security group looks as follows:
Inbound:
80 TCP 0.0.0.0/0 launch-wizard-7
80 TCP ::/0 launch-wizard-7
22 TCP 0.0.0.0/0 launch-wizard-7
5000 TCP 0.0.0.0/0 launch-wizard-7
5000 TCP ::/0 launch-wizard-7
3002 TCP 0.0.0.0/0 launch-wizard-7
3002 TCP ::/0 launch-wizard-7
3000 TCP 0.0.0.0/0 launch-wizard-7
3000 TCP ::/0 launch-wizard-7
443 TCP 0.0.0.0/0 launch-wizard-7
443 TCP ::/0 launch-wizard-7
Outbound:
All All 0.0.0.0/0 launch-wizard-7
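Not part of the original post, but a common check for this symptom: "connection refused" from outside while curl localhost succeeds often means the service is bound only to 127.0.0.1. Which address it listens on can be verified on the instance with, for example:
sudo ss -tlnp | grep 3002
(or sudo netstat -tlnp | grep 3002 on older systems).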

Can't seem to open port 8787 or 3939 on an Ubuntu EC2 instance but 22 and 80 opens fine

I've read through this answer but for the life of me, I can't figure this one out.
I have an Ubuntu 18 EC2 instance running RStudio Server and RStudio Connect, both using default configuration and listening on ports 8787 and 3939 respectively.
Here are my config files:
ubuntu#EC2:~$ cat /etc/rstudio/rserver.conf
# Server Configuration File
#
#
ubuntu#EC2:~$ sudo cat /etc/rstudio-connect/rstudio-connect.gcfg
; RStudio Connect configuration file
[Server]
; SenderEmail is an email address used by RStudio Connect to send outbound
; email. The system will not be able to send administrative email until this
; setting is configured.
;
; SenderEmail = account#company.com
SenderEmail =
; Address is a public URL for this RStudio Connect server. Must be configured
; to enable features like including links to your content in emails. If
; Connect is deployed behind an HTTP proxy, this should be the URL for Connect
; in terms of that proxy.
;
; Address = https://rstudio-connect.company.com
Address =
[HTTP]
; RStudio Connect will listen on this network address for HTTP connections.
Listen = :3939
[Authentication]
; Specifies the type of user authentication.
Provider = password
Here's what I've tried:
Created inbound rules for ports 8787, 3939 and all TCP ports in my security group.
Checked the Network ACL for the subnet the instance is on
Ensured that rstudio-server and rstudio-connect are running and listening on all interfaces and not just localhost
ubuntu#EC2:~$ netstat -ltpn
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8787 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp6 0 0 :::8787 :::* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 :::3939 :::* LISTEN -
Checked that ufw is inactive
ubuntu#EC2:~$ sudo ufw status
Status: inactive
Created an iptables rule for port 8787
ubuntu#EC2:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:8787
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I still can't access port 8787 or 3939 externally. However I can access them both on the host using Lynx.
If I change RStudio Server's configuration to have it use port 80 instead, I can access it externally but it doesn't work for ports 8787 or 3939.
Any ideas why and how to fix this?
I just figured out the answer myself. There was absolutely nothing wrong with my configuration. Opening up all the TCP ports in my security group was overkill and entirely unnecessary, so don't do that.
The issue was that the corporate network I am connected to blocks outbound traffic to external hosts on certain non-standard ports.
If you're in the same boat as me and need to host two services on the same EC2 instance but don't know which ports are unavailable/blocked by your organization, then you can use nmap and portquiz.net to figure it out.
nmap is a port scanner and portquiz.net is a service that listens for connections on all TCP ports. You can scan that host with nmap over the range of TCP ports you're interested in and see which ones show up as open:
nmap -v -p0-8000 portquiz.net
Starting Nmap 7.70 ( https://nmap.org ) at 2019-04-02 16:47 IST
Initiating Ping Scan at 16:47
Scanning portquiz.net (5.196.70.86) [2 ports]
Completed Ping Scan at 16:47, 0.13s elapsed (1 total hosts)
Initiating Parallel DNS resolution of 1 host. at 16:47
Completed Parallel DNS resolution of 1 host. at 16:47, 0.14s elapsed
Initiating Connect Scan at 16:47
Scanning portquiz.net (5.196.70.86) [8001 ports]
Discovered open port 22/tcp on 5.196.70.86
Discovered open port 80/tcp on 5.196.70.86
Discovered open port 443/tcp on 5.196.70.86
Discovered open port 21/tcp on 5.196.70.86
Discovered open port 4080/tcp on 5.196.70.86
Completed Connect Scan at 16:48, 84.98s elapsed (8001 total ports)
Nmap scan report for portquiz.net (5.196.70.86)
Host is up (0.13s latency).
rDNS record for 5.196.70.86: electron.positon.org
Not shown: 7996 filtered ports
PORT STATE SERVICE
21/tcp open ftp
22/tcp open ssh
80/tcp open http
443/tcp open https
4080/tcp open lorica-in
Here, I have 4080 and 80 open so that means the corporate firewall isn't blocking outbound traffic to these ports. After configuring RStudio Server and RStudio Connect to listen on ports 80 and 4080 respectively, I'm now able to access both services externally.
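For reference, a sketch of what that reconfiguration might look like (the answer doesn't show it; www-port is the standard rserver.conf setting, the Listen directive matches the rstudio-connect.gcfg shown above, and the stock service names are assumed):
# /etc/rstudio/rserver.conf
www-port=80

; /etc/rstudio-connect/rstudio-connect.gcfg, in the [HTTP] section
Listen = :4080

# then restart both services
sudo systemctl restart rstudio-server rstudio-connect
Note that binding to port 80 requires the services to start with sufficient privileges.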

I want to connect my service

What happened:
I can't connect to my service from a web browser.
What you expected to happen:
To be able to connect to my service.
How to reproduce it (as minimally and precisely as possible):
First, I made 'my-deploy.yaml' like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy-name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
      - name: mycontainer
        image: alicek106/composetest:balanced_web
        ports:
        - containerPort: 80
And then, I made 'my-service.yaml' like this
apiVersion: v1
kind: Service
metadata:
  name: my-service-name
spec:
  ports:
  - name: my-deploy-svc
    port: 8080
    targetPort: 80
  type: LoadBalancer
  externalIPs:
  - 104.196.161.33
  selector:
    app: my-deploy
So, I created the deployment and service,
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
my-service-name LoadBalancer 10.106.31.254 104.196.161.33 8080:32508/TCP 5d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-deploy-name 3 3 3 3 6d
and tried to connect to 104.196.161.33:8080 and 104.196.161.33:32508 in the Chrome browser, but it doesn't work.
What should I do?
Environment
:
Kubernetes version :
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: VM on ubuntu 16.04.LTS
OS (e.g. from /etc/os-release): ubuntu 16.04.LTS
Kernel : Linux master 4.10.0-37-generic #41~16.04.1-Ubuntu SMP Fri Oct 6 22:42:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Docker-CE v17.06
Others:
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 6d v1.8.1
node1 Ready <none> 6d v1.8.1
node2 Ready <none> 6d v1.8.1
ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:ba:93:a2:f2
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:80:ab:14
inet addr:39.119.118.176 Bcast:39.119.118.255 Mask:255.255.255.128
inet6 addr: fe80::250:56ff:fe80:ab14/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9581474 errors:0 dropped:473 overruns:0 frame:0
TX packets:4928331 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1528509917 (1.5 GB) TX bytes:4020347835 (4.0 GB)
flannel.1 Link encap:Ethernet HWaddr c6:b5:ef:90:ea:8f
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:184 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:44750027 errors:0 dropped:0 overruns:0 frame:0
TX packets:44750027 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13786966091 (13.7 GB) TX bytes:13786966091 (13.7 GB)
※ P.S.: Could you recommend a web service example for Docker & Kubernetes?
In my case, the external IP was allocated by GCE automatically; it doesn't need to be set manually in the YAML configuration. So if an address shows up under EXTERNAL-IP in the output of "kubectl get svc ${service-name}", it may mean things are working the way you intended.
(But I'm not sure whether specifying external-IP in the configuration works.)
And as far as I know, a Service of type LoadBalancer only works with a cloud integration that supports that functionality.
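A practical note, not from the original answer: even without a cloud load balancer, a LoadBalancer Service still gets a NodePort (32508 in the kubectl output above), so as a sketch it should be reachable on any node's address at that port, for example the master's ens160 address from the ifconfig output:
kubectl get svc my-service-name
curl http://39.119.118.176:32508/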
PS. I guess you are trying to work through the contents of "Let's starting Docker" in the Republic of Korea; if so, feel free to contact me by e-mail or Twitter :D
I can reply to you directly, because I'm the author of that book.

Host-only VirtualBox vboxnet1 IP belongs to host

I'm trying to connect to a server running on a guest OS in VirtualBox. VirtualBox is configured to assign a static IP to the guest. However, I'm pretty sure that the IP it assigns actually belongs to the host. If I have an nginx server running on the host, then requests to the vboxnet1 IP are intercepted by the host server and never reach the guest.
Both the host and guest are Debian.
Also, I get the same result with and without the VirtualBox DHCP server enabled.
Here's the VirtualBox network settings (can't embed images with <10 rep...sigh):
And the VM network settings:
I've also tried with different IP addresses for the host, no change.
ifconfig on host:
$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:90:f5:e8:b0:e0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:183363 errors:0 dropped:0 overruns:0 frame:0
TX packets:183363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:70022881 (70.0 MB) TX bytes:70022881 (70.0 MB)
vboxnet1 Link encap:Ethernet HWaddr 0a:00:27:00:00:01
inet addr:192.168.66.1 Bcast:192.168.66.255 Mask:255.255.255.0
inet6 addr: fe80::800:27ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2545 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:308371 (308.3 KB)
wlan0 Link encap:Ethernet HWaddr 24:fd:52:c0:1b:b1
inet addr:192.168.2.106 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::26fd:52ff:fec0:1bb1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11724555 errors:0 dropped:0 overruns:0 frame:0
TX packets:7429276 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16204472393 (16.2 GB) TX bytes:1222715861 (1.2 GB)
iptables on host:
$ sudo iptables -L
[sudo] password for aidan:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here's an nmap comparing the host IP and the 'guest' IP:
$ nmap 192.168.66.1
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.66.1
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.16 seconds
$ nmap 192.168.2.106
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.2.106
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
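Not part of the original post, but one way to confirm the suspicion: from the guest, check which MAC address answers for 192.168.66.1 and compare it with the host's vboxnet1 MAC shown in the ifconfig output above (0a:00:27:00:00:01), for example:
ping -c 1 192.168.66.1
ip neigh show 192.168.66.1    # or: arp -n 192.168.66.1
If the reported MAC is the vboxnet1 one, the address is indeed the host side of the host-only network.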