What happened
: I can't connect to my service from a web browser.
What you expected to happen
: To be able to connect to my service.
How to reproduce it (as minimally and precisely as possible)
:
First, I created 'my-deploy.yaml' like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-deploy-name
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: my-deploy
    spec:
      containers:
      - name: mycontainer
        image: alicek106/composetest:balanced_web
        ports:
        - containerPort: 80
Then I created 'my-service.yaml' like this:
apiVersion: v1
kind: Service
metadata:
  name: my-service-name
spec:
  ports:
  - name: my-deploy-svc
    port: 8080
    targetPort: 80
  type: LoadBalancer
  externalIPs:
  - 104.196.161.33
  selector:
    app: my-deploy
Then I created the Deployment and the Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
my-service-name LoadBalancer 10.106.31.254 104.196.161.33 8080:32508/TCP 5d
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
my-deploy-name 3 3 3 3 6d
and tried to connect to 104.196.161.33:8080 and 104.196.161.33:32508 in the Chrome browser, but it doesn't work.
What should I do?
Environment
:
Kubernetes version :
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:27:35Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.1", GitCommit:"f38e43b221d08850172a9a4ea785a86a3ffa3b3a", GitTreeState:"clean", BuildDate:"2017-10-11T23:16:41Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: VM running Ubuntu 16.04 LTS
OS (e.g. from /etc/os-release): Ubuntu 16.04 LTS
Kernel : Linux master 4.10.0-37-generic #41~16.04.1-Ubuntu SMP Fri Oct 6 22:42:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Docker-CE v17.06
Others:
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 6d v1.8.1
node1 Ready <none> 6d v1.8.1
node2 Ready <none> 6d v1.8.1
ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:ba:93:a2:f2
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
ens160 Link encap:Ethernet HWaddr 00:50:56:80:ab:14
inet addr:39.119.118.176 Bcast:39.119.118.255 Mask:255.255.255.128
inet6 addr: fe80::250:56ff:fe80:ab14/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:9581474 errors:0 dropped:473 overruns:0 frame:0
TX packets:4928331 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1528509917 (1.5 GB) TX bytes:4020347835 (4.0 GB)
flannel.1 Link encap:Ethernet HWaddr c6:b5:ef:90:ea:8f
inet addr:10.244.0.0 Bcast:0.0.0.0 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:184 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:44750027 errors:0 dropped:0 overruns:0 frame:0
TX packets:44750027 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:13786966091 (13.7 GB) TX bytes:13786966091 (13.7 GB)
※ P.S.: Could you recommend a web service example for Docker & Kubernetes?
In my case, the external IP was allocated by GCE automatically; it doesn't need to be set manually in the YAML configuration. So if an external IP shows up under EXTERNAL-IP in the output of "kubectl get svc ${service-name}", it probably means things work as you intended.
(But I'm not sure whether specifying externalIPs in the configuration works.)
And as far as I know, a Service of type LoadBalancer only works with a cloud integration that supports such functionality.
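If you are not running on such a cloud provider, a NodePort Service is the simplest alternative for exposing the Deployment outside the cluster. A minimal sketch based on the manifests above (the Service name and nodePort value are just examples):
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport   # example name
spec:
  type: NodePort
  selector:
    app: my-deploy            # must match the Deployment's pod labels
  ports:
  - port: 8080                # port inside the cluster
    targetPort: 80            # containerPort of the pods
    nodePort: 30080           # must be in the 30000-32767 range
With this in place, the application should be reachable at http://<node-IP>:30080, using the node's own address rather than a manually specified externalIP.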
P.S. I guess you are trying out the contents of "Let's starting Docker" in the Republic of Korea; if so, kindly contact me by e-mail or Twitter :D
I can reply to you directly because I'm the author of that book.
Related
I deployed Istio in my RKE2 cluster on AWS EC2, and everything works fine with the istio-ingress service set as a NodePort; we can communicate with the application without any issue.
When I change the service from NodePort to LoadBalancer, the external IP address permanently stays in <pending>.
The RKE2 cluster is set up to work with Istio, but the cloud provider integration was never configured because of internal policies.
This is my ingressgateway service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    field.cattle.io/publicEndpoints: "null"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: '"true"'
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: '"false"'
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-security-groups: '"sg-mysg"'
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.14.1
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
spec:
  clusterIP: XX.XX.XX.XX
  clusterIPs:
  - XX.XX.XX.XX
  externalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: status-port
    nodePort: 30405
    port: 15021
    protocol: TCP
    targetPort: 15021
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tcp
    nodePort: 31400
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    nodePort: 32065
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
And this is the current configuration of my NLB.
For now, the load balancer I use has only two ports set:
80 mapped to the target group pointing to TCP port 31380
443 mapped to the target group pointing to TCP port 31390
I also tried target groups pointing to TCP port 8080 for port 80 and TCP port 8443 for port 443, without success.
The security groups have all the ports used by Istio open for the CIDR and the VPC.
Any help is appreciated.
I have a Celery instance running inside a pod in a local Kubernetes cluster, whereas the Redis server/broker it connects to runs on my localhost:6379, outside Kubernetes. How can I get my k8s pod to talk to the locally deployed Redis?
You can create a headless Service and an Endpoints resource with the statically defined IP address of the node where the Redis server is running.
I've created an example to illustrate how it works.
First, I created a headless Service and an Endpoints resource.
NOTE: The Endpoints resource has the IP address of the node where the Redis server is running:
# example.yml
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  clusterIP: None
  ports:
  - name: redis
    port: 6379
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: redis
  namespace: default
subsets:
- addresses:
  - ip: 10.156.0.58 # your node's IP address
  ports:
  - port: 6379
    name: redis
    protocol: TCP
After creating the above resources, we are able to resolve the redis service name to the IP address:
# kubectl get svc,ep redis
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis ClusterIP None <none> 6379/TCP 28m
NAME ENDPOINTS AGE
endpoints/redis 10.156.0.58:6379 28m
# kubectl run dnsutils --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.3 -it --rm
/ # nslookup redis
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: redis.default.svc.cluster.local
Address: 10.156.0.58
Additionally, if your Redis server is listening only on localhost, you need to modify the iptables rules. To configure forwarding from port 6379 (the default Redis port) to localhost, you can use:
NOTE: Instead of 10.156.0.58, use the IP address of the node where your Redis server is running.
# iptables -t nat -A PREROUTING -p tcp -d 10.156.0.58 --dport 6379 -j DNAT --to-destination 127.0.0.1:6379
As you can see, it is easier if Redis is not listening only on localhost, because then we don't have to modify the iptables rules at all.
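For completeness, a sketch of making Redis listen on all interfaces instead (this assumes you can edit the server's configuration; if you do this, make sure the port is protected by a password or firewall rules):
# one-off: start redis bound to all interfaces
redis-server --bind 0.0.0.0 --protected-mode no
# or permanently, in /etc/redis/redis.conf:
#   bind 0.0.0.0
#   protected-mode no
# then restart the service (the unit name may differ per distribution)
sudo systemctl restart redis-server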
Finally, let's see if we can connect from a Pod to the Redis server on the host machine:
# kubectl exec -it redis-client -- bash
root@redis-client:/# redis-cli -h redis
redis:6379> SET key1 "value1"
OK
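Once the name resolves, the Celery pod only needs to point its broker URL at the Service name. A sketch of the relevant part of the pod spec (the environment variable name, image, and database number are assumptions about a typical Celery setup):
    containers:
    - name: celery-worker
      image: my-celery-image            # hypothetical image
      env:
      - name: CELERY_BROKER_URL
        value: redis://redis:6379/0     # "redis" resolves via the headless Service above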
I am connected to an AWS server where I want to host an Elasticsearch application. For that to work, I need to open a set of ports. In my AWS security group, I have opened the ones I consider necessary. To check whether that worked, I tried the following:
While connected to AWS via ssh, I typed curl localhost:3002, which outputs:
<html><body>You are being redirected.</body></html>
When I try the same from my local machine, i.e. curl http://ec2-xxxxx.eu-central-1.compute.amazonaws.com:3002, I receive:
curl: (7) Failed to connect to ec2-xxxxx.eu-central-1.compute.amazonaws.com port 3002: Connection refused
Does that mean that port 3002 is not open, or could there be another explanation?
Thank you for your help!
Edit:
The configuration in the security group looks as follows:
Inbound:
80 TCP 0.0.0.0/0 launch-wizard-7
80 TCP ::/0 launch-wizard-7
22 TCP 0.0.0.0/0 launch-wizard-7
5000 TCP 0.0.0.0/0 launch-wizard-7
5000 TCP ::/0 launch-wizard-7
3002 TCP 0.0.0.0/0 launch-wizard-7
3002 TCP ::/0 launch-wizard-7
3000 TCP 0.0.0.0/0 launch-wizard-7
3000 TCP ::/0 launch-wizard-7
443 TCP 0.0.0.0/0 launch-wizard-7
443 TCP ::/0 launch-wizard-7
Outbound:
All All 0.0.0.0/0 launch-wizard-7
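For reference, one way to tell whether this is a security-group problem or a bind-address problem is to check which address the application is listening on from the instance itself (a sketch; 3002 is the port from the question):
$ sudo ss -tlnp | grep 3002
If the output shows 127.0.0.1:3002, the application only accepts local connections, which would also explain why curl works over ssh but not remotely; 0.0.0.0:3002 or *:3002 means it accepts external connections, and the security group rules above should then be sufficient.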
I couldn't figure out why I couldn't connect to my containers from a public IP until I found out which IP addresses the Docker ports were listening on. As the ifconfig output below shows, Docker is using 172.x addresses, which are not valid within my VPC; you can see that none of my VPC subnets use a 172.x range, so I'm not sure where this is coming from. Should I create a new subnet in a new VPC, build an AMI, and launch the instance there with a conforming subnet? Or can I change the IP/port Docker is listening on?
ifconfig
br-387bdd8b6fc4 Link encap:Ethernet HWaddr 02:42:69:A3:BA:A9
inet addr:172.18.0.1 Bcast:172.18.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:69ff:fea3:baa9/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:114269 errors:0 dropped:0 overruns:0 frame:0
TX packets:83675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:11431231 (10.9 MiB) TX bytes:36504449 (34.8 MiB)
docker0 Link encap:Ethernet HWaddr 02:42:65:A6:7C:B3
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
eth0 Link encap:Ethernet HWaddr 02:77:F6:7A:50:A6
inet addr:10.0.140.193 Bcast:10.0.143.255 Mask:255.255.240.0
inet6 addr: fe80::77:f6ff:fe7a:50a6/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:9001 Metric:1
RX packets:153720 errors:0 dropped:0 overruns:0 frame:0
TX packets:65773 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:209782581 (200.0 MiB) TX bytes:5618173 (5.3 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:30 errors:0 dropped:0 overruns:0 frame:0
TX packets:30 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2066 (2.0 KiB) TX bytes:2066 (2.0 KiB)
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 172.18.0.11 tcp dpt:389
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:9043
ACCEPT tcp -- 0.0.0.0/0 172.18.0.13 tcp dpt:7777
ACCEPT tcp -- 0.0.0.0/0 172.18.0.3 tcp dpt:9443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.7 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.8 tcp dpt:443
ACCEPT tcp -- 0.0.0.0/0 172.18.0.9 tcp dpt:443
DockerSubnet1-Public 10.0.1.0/24
DockerSubnet2-Public 10.0.2.0/24
DockerSubnet3-Private 10.0.3.0/24
DockerSubnet4-Private 10.0.4.0/24
Private subnet 1A 10.0.0.0/19
Private subnet 2A 10.0.32.0/19
Public subnet 1 10.0.128.0/20
Public subnet 2 10.0.144.0/20
The standard way to use Docker networking is with the docker run -p command-line option. If you run:
docker run -p 8888:80 myimage
Docker will automatically set up a port forward from port 8888 on the host to port 80 in the container.
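As a quick sanity check from the host itself (using nginx purely as an example image that serves on container port 80):
$ docker run -d -p 8888:80 --name web nginx
$ curl http://localhost:8888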
If your host has multiple interfaces (you hint at a "public IP", though it's not shown separately in your ifconfig output), you can make it listen on only one of them by adding an IP address:
docker run -p 10.0.140.193:8888:80 myimage
The Docker-internal 172.18.0.0/16 addresses are essentially useless. They're an important implementation detail when talking between containers, but Docker provides an internal DNS service that will resolve container names to internal IP addresses. In figuring out how to talk to a container from "outside", you don't need these IP addresses.
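For example, containers attached to the same user-defined network can reach each other by name, without ever caring about the 172.x addresses (a sketch with placeholder names):
$ docker network create appnet
$ docker run -d --network appnet --name web nginx
$ docker run --rm --network appnet curlimages/curl http://web:80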
The terminology in your question hints strongly at Amazon Web Services. A common problem here is that your EC2 instance is running under a security group (network-level firewall) that isn't allowing the inbound connection.
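If that turns out to be the cause, the inbound rule can be added in the console or with the AWS CLI, roughly like this (the security group ID and port are placeholders):
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 8888 \
    --cidr 0.0.0.0/0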
I never did figure out why. I did allow all traffic inbound and outbound in the security groups, network ACLs, etc. I made an AMI of my instance, copied it over to another region with a newly built VPC, and deployed it there. It works!! Chalking it up to AWS VPC.
Thanks for the clarification on the 172.x addresses; I did not know those were only used between the Docker containers. Makes sense now.
I'm trying to connect to a server running on a guest OS in VirtualBox. VirtualBox is configured to assign a static IP to the guest. However, I'm pretty sure that the IP it assigns actually belongs to the host. If I have an nginx server running on the host, then requests to the vboxnet1 IP are intercepted by the host server and never reach the guest.
Both the host and guest are Debian.
Also, I get the same result with and without the VirtualBox DHCP server enabled.
Here's the VirtualBox network settings (can't embed images with <10 rep...sigh):
And the VM network settings:
I've also tried with different IP addresses for the host, no change.
ifconfig on host:
$ ifconfig
eth0 Link encap:Ethernet HWaddr 00:90:f5:e8:b0:e0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:183363 errors:0 dropped:0 overruns:0 frame:0
TX packets:183363 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:70022881 (70.0 MB) TX bytes:70022881 (70.0 MB)
vboxnet1 Link encap:Ethernet HWaddr 0a:00:27:00:00:01
inet addr:192.168.66.1 Bcast:192.168.66.255 Mask:255.255.255.0
inet6 addr: fe80::800:27ff:fe00:1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:2545 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:308371 (308.3 KB)
wlan0 Link encap:Ethernet HWaddr 24:fd:52:c0:1b:b1
inet addr:192.168.2.106 Bcast:192.168.2.255 Mask:255.255.255.0
inet6 addr: fe80::26fd:52ff:fec0:1bb1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11724555 errors:0 dropped:0 overruns:0 frame:0
TX packets:7429276 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:16204472393 (16.2 GB) TX bytes:1222715861 (1.2 GB)
iptables on host:
$ sudo iptables -L
[sudo] password for aidan:
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Here's an nmap comparing the host IP and the 'guest' IP:
$ nmap 192.168.66.1
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.66.1
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.16 seconds
$ nmap 192.168.2.106
Starting Nmap 6.40 ( http://nmap.org ) at 2015-07-22 13:28 CEST
Nmap scan report for 192.168.2.106
Host is up (0.00015s latency).
Not shown: 995 closed ports
PORT STATE SERVICE
139/tcp open netbios-ssn
445/tcp open microsoft-ds
902/tcp open iss-realsecure
3128/tcp open squid-http
5050/tcp open mmcc
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds