Socket connect() to any port succeeds in an Istio sidecar pod - istio

The istio-proxy log shows the socket connection was rejected:
[2022-11-21T12:42:47.825Z] "- - -" 0 UF,URX - - "-" 0 0 10000 - "-" "-" "-" "-" "103.235.46.40:123" PassthroughCluster - 103.235.46.40:123 100.108.44.117:60480
But inside the pod, the Python socket reports a successful connection:
<socket.socket fd=3, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6, laddr=('100.108.44.117', 60480), raddr=('103.235.46.40', 123)>
No matter which port is used, the socket reports a successful connect.
How can Istio be configured so that the socket sees the correct (failing) behavior?
I tried setting the sidecar outboundTrafficPolicy mode to REGISTRY_ONLY.
But connect() to a hostname that is not configured in any ServiceEntry still succeeds, on any port.
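This behavior is expected with a sidecar: iptables redirects outbound TCP to Envoy, which completes the handshake locally and only then dials the real destination, so connect() alone can never observe the upstream failure; the UF,URX reset only surfaces on the first read or write. A diagnostic sketch that looks past the handshake (assumptions: plain TCP, Python stdlib only; sending a stray probe byte is harmless for this illustration but would confuse some real protocols):

```python
import socket

def tcp_probe(host, port, timeout=3.0):
    """Connect, write one byte, and wait for a reply (diagnostic sketch).

    Behind an Istio sidecar, connect() alone always succeeds: Envoy
    completes the TCP handshake locally before dialing the destination,
    so the UF,URX failure only surfaces on the first read or write.
    """
    try:
        s = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return False  # genuine refusal (no sidecar intercepting the path)
    with s:
        s.settimeout(timeout)
        try:
            s.sendall(b"\x00")
            chunk = s.recv(1)  # Envoy's reset surfaces here, not at connect()
        except socket.timeout:
            return True   # the peer is holding the connection open
        except OSError:   # ConnectionResetError, BrokenPipeError, ...
            return False
        return chunk != b""  # b"" means the peer closed immediately
```

With REGISTRY_ONLY in place, a probe like this reports failure for unregistered destinations even though the raw connect() still appears to succeed.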

Related

Kubernetes ingress on GKE results in 502 response on http / SSL_ERROR_SYSCALL on https

I've tested my configuration on minikube, where it works perfectly; however, on GKE I run into an error: HTTP responds with 502 while the HTTPS connection gets terminated.
I have no idea how to diagnose this issue, which logs could I look at?
Here is a verbose curl log when accessing over https://
* Expire in 0 ms for 1 (transfer 0x1deb470)
* Expire in 0 ms for 1 (transfer 0x1deb470)
* Expire in 0 ms for 1 (transfer 0x1deb470)
* Trying 35.244.154.110...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x1deb470)
* Connected to chrischrisexample.de (35.244.154.110) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to chrischrisexample.de:443
* Closing connection 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to chrischrisexample.de:443
To solve it I had to:
Respond with a HTTP 200 on the health check (from the Google load balancer!)
Set an SSL certificate secret in the ingress (even if a self-signed one)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Sync 14m (x20 over 157m) loadbalancer-controller Could not find TLS certificates. Continuing setup for the load balancer to serve HTTP. Note: this behavior is deprecated and will be removed in a future version of ingress-gce
Warning Translate 3m56s (x18 over 9m24s) loadbalancer-controller error while evaluating the ingress spec: could not find port "80" in service "default/app"; could not find port "80" in service "default/app"; could not find port "80" in service "default/app"; could not find port "80" in service "default/app"
These errors were shown by kubectl describe ingress... It still doesn't make sense why it would fail on the SSL handshake / connection, though.
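The first fix above (answering the load balancer's health check with HTTP 200) can be sketched with a minimal stdlib handler. This is an illustration, not the actual app: ingress-gce health checks hit "/" by default, and the "/healthz" path below is a hypothetical extra:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthCheckHandler(BaseHTTPRequestHandler):
    """Answer the load balancer's health check with a plain 200 OK."""

    def do_GET(self):
        # GCLB health checks default to "/"; "/healthz" is an illustrative
        # extra path, not something GCLB requires
        if self.path in ("/", "/healthz"):
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

# To serve on all interfaces:
#   HTTPServer(("0.0.0.0", 8080), HealthCheckHandler).serve_forever()
```

Until the backend returns 200 on the health-check path, the GCLB marks it unhealthy and answers with 502 (HTTP) or drops the connection (HTTPS), which matches the symptoms above.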

Which vCenter Server Appliance 5.5 service should be running on port 443/tcp?

I get a 'Connection refused' error when I try to connect from the vSphere Client and the web client.
I checked the output of netstat -tnpl and did not see port 443 among the listening ports.
Which vCenter Server Appliance 5.5 service should be running on port 443/tcp?
I was able to start the service running on port 443. This service is vmware-vpxd:
$ netstat -tnpl | grep :443
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 4780/vpxd
tcp 0 0 :::443 :::* LISTEN 4780/vpxd
In my case, I got an error when vpxd started: "vpxd failed to initialize"
The problem was solved by updating the vCenter Server Appliance, as described in the article https://kb.vmware.com/s/article/2031331
A similar problem is described in this blog post:
https://blog.robinfourdeux.com/vcenter-5-1b-waiting-for-vpxd-to-initialize-failed/

Error : curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused

I have R code deployed on an AWS server to fetch Twitter data; it essentially exposes an API.
I want to fetch the data on my local system using this API, which calls the function running on the AWS server.
I'm using this command:
$curl "http://127.0.0.1 ip-public ip:8000/meaning?woeid=23424848&n=1"
But I'm getting the errors below:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
curl: (6) Could not resolve host: ip-public ip
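Both errors come from the URL itself: "127.0.0.1 ip-public ip:8000/..." contains spaces, so curl treats it as multiple targets. It connects to 127.0.0.1 on the default port 80 (refused, since nothing local listens there) and then tries to resolve the literal hostname "ip-public ip" (which fails). The request should go to the EC2 instance's public IP alone. A tiny, hypothetical helper showing the intended URL shape (the host below is a placeholder, not the real address):

```python
def api_url(host, port, woeid, n):
    """Build the query URL for the Twitter-trends API (hypothetical helper).

    `host` must be the EC2 instance's public IP or DNS name on its own;
    prefixing it with 127.0.0.1 makes curl split the URL into two targets.
    """
    return f"http://{host}:{port}/meaning?woeid={woeid}&n={n}"

# e.g. api_url("203.0.113.10", 8000, 23424848, 1)
# ->  "http://203.0.113.10:8000/meaning?woeid=23424848&n=1"
```

The resulting string is what should be passed to curl, in quotes, as a single argument.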

Redis Cluster 3.2.0 on EC2 t2.micro (no Elasticache)

After much ado, I've come to an impasse. I'm trying to set up a Redis cluster of 3 master nodes and 3 slaves on a single t2.micro. My setup on my localhost works great, but when I try to run it on EC2 I encounter a strange problem: my client (on a separate t2.micro using ioredis) seems to find the cluster and connect, but then repeatedly throws many errors like "ioredis:connection error: Error: connect ECONNREFUSED" if I have my client in http. If I switch to https I get additional timeout errors and "manually closed" errors (I tried setting the TLS flag in the cluster options, to no avail).
TL;DR
Thoughts?? Why can I not create the cluster with the public IP (rather than 127.0.0.1) using redis-trib? That would seem to solve my problems. Or is there something obvious that I'm missing here, like a firewall?
If you're reading this and struggling with redis, the list of points below could serve as a good summary of almost every proposed redis solution on the top pages of google and stackoverflow. Use them well!
After reading up on several similar topics, I've found that none of them address the problem. Here's what I tried:
Checked my EC2 security groups to ensure the right ports were open between my redis t2.micro and client t2.micro. Ensured that redis ports+10,000 (for the bus) were also open.
Checked my AWS vpc, internet gateway, subnets and acls to ensure traffic could flow between the two instances
Ran some netstat and it looks like I can connect to the correct ports and that redis is listening on the right ports
Ensured in the redis.conf file for each node that the protected-mode (set to no), bind (commented out) and password (commented out) settings weren't inhibiting communication. At first this was part of the problem. At one point I turned all of them off and still ended up having the same errors.
I removed any old aof, dump.rdb, node.conf files and started fresh instances. I ensured each node had its own folder (no sharing of node.conf files).
I tried connecting the redis cluster using the loopback 127.0.0.1 like so:
./redis-trib.rb create --replicas 1 127.0.0.1:30010 127.0.0.1:30011 127.0.0.1:30012 127.0.0.1:30013 127.0.0.1:30014 127.0.0.1:30015
and still had errors from the client. So I then tried the AWS public host address of the redis t2.micro, then the public IP, and then the private IP. When I start the nodes (using ps -ef to ensure they are running in daemon mode) and then try ./redis-trib create --replicas 1 publicIP:30010 ..etc using the public IP, it looks like it will create the cluster but then hangs at ">>>creating cluster" until it fails and says it cannot connect to the first node. It will not let me create the cluster with the public IP instead of 127.0.0.1 (which I suspect is why my client cannot connect). It seems like other people have had success connecting this way, but not in this case. (I also tried to run redis-trib from my client; it would connect and generate the aof and node.conf files on the redis t2.micro, but it would also hang and eventually say it couldn't find the nodes...)
Once I had the cluster up and running under 127.0.0.1, the nodes would communicate and redis-cli returned PONG to my ping, but setting a key gives "(error) MOVED 16164 127.0.0.1:30012", and the same for 'get'. So I tried to manually set the public IP by sending a "cluster meet", as in this example: redis-cli redirected to 127.0.0.1
Still no go. While I sent the meet commands, some of the 127.0.0.1 entries remained, and the ones that I did set to the public IP seemed to switch back by the time I'd finished running through all the nodes.
The only thing left to suspect is that AWS is blocking ports somewhere. I tried opening all ports on both t2.micro instances, wide open to anyone, and it still didn't work. I thought about looking into iptables on the EC2 instances, but those shouldn't be set given that there are security groups (and I haven't messed around with iptables much). I thought this was going to take me an hour, and now I'm still sitting here scratching my head.
Some potentially useful code:
Cluster Code:
export var cluster = new Redis.Cluster([{
  port: 30010,
  host: '52.36.xxx.xxx'
}, {
  port: 30011,
  host: '52.36.xxx.xxx'
}, {
  port: 30012,
  host: '52.36.xxx.xxx'
}]);
30010 nodes.conf
337e0c0152cc88590d73048a6f97120934d94da8 127.0.0.1:30010 myself,master - 0 0 1 connected 0-5460
8f7cf7a0016c372ebaaffd76b903e26e47f2a513 127.0.0.1:30014 slave 882fed6d144b6dea1531691deb323a3ae0b52936 0 1471601371978 5 connected
2c36b871bbdb6f8b98a2562ff315bf79ca524ec5 127.0.0.1:30013 slave 337e0c0152cc88590d73048a6f97120934d94da8 1471601372982 1471601368969 4 connected
265b166b7231a7c0a8017f4f7fad90261d59fb96 127.0.0.1:30015 slave 42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 0 1471601367966 6 connected
882fed6d144b6dea1531691deb323a3ae0b52936 127.0.0.1:30011 master - 0 1471601369972 2 connected 5461-10922
42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 127.0.0.1:30012 master - 0 1471601370977 3 connected 10923-16383
vars currentEpoch 6 lastVoteEpoch 0
127.0.0.1:30010> cluster nodes
337e0c0152cc88590d73048a6f97120934d94da8 127.0.0.1:30010 myself,master - 0 0 1 connected 0-5460
8f7cf7a0016c372ebaaffd76b903e26e47f2a513 127.0.0.1:30014 slave 882fed6d144b6dea1531691deb323a3ae0b52936 0 1471601610630 5 connected
2c36b871bbdb6f8b98a2562ff315bf79ca524ec5 127.0.0.1:30013 slave 337e0c0152cc88590d73048a6f97120934d94da8 0 1471601611632 4 connected
265b166b7231a7c0a8017f4f7fad90261d59fb96 127.0.0.1:30015 slave 42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 0 1471601609627 6 connected
882fed6d144b6dea1531691deb323a3ae0b52936 127.0.0.1:30011 master - 0 1471601612634 2 connected 5461-10922
42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 127.0.0.1:30012 master - 0 1471601607622 3 connected 10923-16383
Client errors (sudo DEBUG=ioredis:* node app.js):
ioredis:redis status[127.0.0.1:30010]: close -> end +1ms
ioredis:redis status[127.0.0.1:30012]: wait -> connecting +0ms
ioredis:connection error: Error: connect ECONNREFUSED 127.0.0.1:30012 +0ms
ioredis:redis status[127.0.0.1:30012]: connecting -> close +0ms
ioredis:connection skip reconnecting because `retryStrategy` is not a function +0ms
ioredis:redis status[127.0.0.1:30012]: close -> end +0ms
ioredis:cluster status: connect -> close +0ms
ioredis:cluster status: close -> reconnecting +0ms
ioredis:delayqueue send 1 commands in failover queue +94ms
REDIS222 CONNECT error Error: Failed to refresh slots cache.
node error Error: timeout
at Object.exports.timeout (/home/ubuntu/main2/node_modules/ioredis/lib/utils/index.js:153:36)
at Cluster.getInfoFromNode (/home/ubuntu/main2/node_modules/ioredis/lib/cluster/index.js:552:32)
at tryNode (/home/ubuntu/main2/node_modules/ioredis/lib/cluster/index.js:347:11)
at Cluster.refreshSlotsCache (/home/ubuntu/main2/node_modules/ioredis/lib/cluster/index.js:362:3)
SSH into the redis t2.micro and run netstat. It seems to be listening on the correct ports (30010-30015):
ubuntu@ip-xxx-xx-xx-xxx:~$ sudo netstat -ntlp | grep LISTEN
tcp 0 0 0.0.0.0:40013 0.0.0.0:* LISTEN 1328/redis-server *
tcp 0 0 0.0.0.0:40014 0.0.0.0:* LISTEN 1334/redis-server *
tcp 0 0 0.0.0.0:40015 0.0.0.0:* LISTEN 1336/redis-server *
tcp 0 0 0.0.0.0:30010 0.0.0.0:* LISTEN 1318/redis-server *
tcp 0 0 0.0.0.0:30011 0.0.0.0:* LISTEN 1322/redis-server *
tcp 0 0 0.0.0.0:30012 0.0.0.0:* LISTEN 1324/redis-server *
tcp 0 0 0.0.0.0:30013 0.0.0.0:* LISTEN 1328/redis-server *
tcp 0 0 0.0.0.0:30014 0.0.0.0:* LISTEN 1334/redis-server *
tcp 0 0 0.0.0.0:30015 0.0.0.0:* LISTEN 1336/redis-server *
tcp 0 0 0.0.0.0:40010 0.0.0.0:* LISTEN 1318/redis-server *
tcp 0 0 0.0.0.0:40011 0.0.0.0:* LISTEN 1322/redis-server *
tcp 0 0 0.0.0.0:40012 0.0.0.0:* LISTEN 1324/redis-server *
SSH into the client t2.micro and remotely run cluster nodes against the remote redis server; it returns the same loopback setup:
ubuntu@ip-xxx-xx-xx-x:~/redis-3.2.2/src$ ./redis-cli -h 52.36.237.185 -p 30010 cluster nodes
337e0c0152cc88590d73048a6f97120934d94da8 127.0.0.1:30010 myself,master - 0 0 1 connected 0-5460
8f7cf7a0016c372ebaaffd76b903e26e47f2a513 127.0.0.1:30014 slave 882fed6d144b6dea1531691deb323a3ae0b52936 0 1471629274223 5 connected
2c36b871bbdb6f8b98a2562ff315bf79ca524ec5 127.0.0.1:30013 slave 337e0c0152cc88590d73048a6f97120934d94da8 0 1471629275225 4 connected
265b166b7231a7c0a8017f4f7fad90261d59fb96 127.0.0.1:30015 slave 42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 0 1471629272217 6 connected
882fed6d144b6dea1531691deb323a3ae0b52936 127.0.0.1:30011 master - 0 1471629276228 2 connected 5461-10922
42e5b9b8ab9e1d2eefe1832e118085b4e44ae65d 127.0.0.1:30012 master - 0 1471629277231 3 connected 10923-16383
UPDATE
I ran redis-trib.rb check locally on the redis server and it showed everything is dandy:
ubuntu@ip-172-xx-xx-xxx:~/redis-3.2.2/src$ ./redis-trib.rb check 127.0.0.1:30010
>>> Performing Cluster Check (using node 127.0.0.1:30010)
...
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
But when I run it from my client on a different instance using the redis publicIP I get:
ubuntu@ip-172-xx-xx-x:~/redis-3.2.2/src$ ./redis-trib.rb check redispublicIP:30010
[ERR] Sorry, can't connect to node 127.0.0.1:30014
[ERR] Sorry, can't connect to node 127.0.0.1:30013
[ERR] Sorry, can't connect to node 127.0.0.1:30015
[ERR] Sorry, can't connect to node 127.0.0.1:30011
[ERR] Sorry, can't connect to node 127.0.0.1:30012
>>> Performing Cluster Check (using node redispublicIP:30010)
M: 337e0c0152cc88590d73048a6f97120934d94da8 redispublicIP:30010
slots:0-5460 (5461 slots) master
0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.
So it does look like I need to change that 127.0.0.1. The client can connect to a single node if I use publicIP:port, but when it tries to find the other nodes it must think they are local.
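This can be confirmed mechanically: redis-trib (and any cluster-aware client) learns the other nodes' addresses from the cluster nodes output, so if the nodes announce 127.0.0.1 there, every follow-up connection targets the client's own loopback. A small sketch that flags such advertisements, parsing the Redis 3.2 cluster nodes line format shown above (one hedged helper, not part of redis itself):

```python
def loopback_announcements(cluster_nodes_output):
    """Return the addresses a Redis cluster advertises that point at loopback.

    Each line of `cluster nodes` (Redis 3.2) is: <node-id> <ip:port> <flags> ...
    A remote client dials exactly that <ip:port>, so any 127.0.0.1 entry
    is unreachable from another machine.
    """
    bad = []
    for line in cluster_nodes_output.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[1].split(":")[0] == "127.0.0.1":
            bad.append(parts[1])
    return bad
```

Running this over the cluster nodes output above flags all six nodes, which is exactly why the remote redis-trib check fails on every node except the one it was pointed at.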
Update2:
Seems like this is my problem but I've double checked and no passwords are set in any of the 6 redis.conf files:
Getting a connection error when using redis-trib.rb to create a cluster?
Update3: This article is very close but I do not understand his solution:
src/redis-trib.rb create 127.0.0.1:6379 127.0.0.1:6380 h2:p1 h2:p2 h3:p1 h3:p2
Specifically, why is he declaring the hosts and ports as h2:p1 h2:p2 h3:p1 h3:p2?
Update4:
It appears that this may be an issue with AWS t2.micro instances. I've sent a request to AWS Support:
https://forums.aws.amazon.com/thread.jspa?messageID=647509
SOLVED:
Using the private IP address in both the client configuration and the redis-trib create command solved the issue. I had tried the private IP in the client configuration, but mistakenly thought I had also tried it with redis-trib.
For anyone else, the lesson: use the private IP of the redis EC2 instance, not the public IP, when joining the cluster with redis-trib. Thanks to this video for helping me catch on:
https://www.youtube.com/watch?v=s4YpCA2Y_-Q

Amazon AWS EC2 ports: connection refused

I have just created an EC2 instance on a brand new AWS account, behind a security group, and loaded some software on it. I am running Sinatra on the machine on port 4567 (currently), and have opened that port in my security group to the whole world. Further, I am able to ssh into the EC2 instance, but I cannot connect on port 4567. I am using the public IP to connect:
shakuras:~ tyler$ curl **.***.**.***:22
SSH-2.0-OpenSSH_6.2p2 Ubuntu-6ubuntu0.1
curl: (56) Recv failure: Connection reset by peer
shakuras:~ tyler$ curl **.***.**.***:4567
curl: (7) Failed connect to **.***.**.***:4567; Connection refused
But my web server is running, since I can see the site when I curl from localhost:
ubuntu@ip-172-31-8-160:~$ curl localhost:4567
Hello world! Welcome
I thought it might be the firewall but I ran iptables and got:
ubuntu@ip-172-31-8-160:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
I'm pretty lost on what is going on here. Why can't I connect from the outside world?
Are you sure that the web server is listening on interfaces other than localhost?
Check the output of
netstat -an | grep 4567
If it isn't listening on 0.0.0.0 then that is the cause.
This sounds like an issue with the Sinatra binding. Check the linked posts about binding Sinatra to all IP addresses.
You are listening on 127.0.0.1, based on your netstat command. The output should instead look something like this:
tcp 0 0 :::8080 :::* LISTEN
Can you post your Sinatra configs? What are you using to start it?
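The binding distinction the answers describe can be shown in a few lines (a stdlib-only illustration of the general mechanism, not Sinatra itself):

```python
import socket

def make_listener(bind_addr="0.0.0.0", port=0):
    """Open a listening TCP socket.

    bind_addr="0.0.0.0" accepts connections on every interface, while
    "127.0.0.1" is reachable only from the host itself -- exactly the
    symptom above (curl works on localhost, refused from outside).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind((bind_addr, port))  # port=0 lets the OS pick a free port
    s.listen()
    return s
```

For Sinatra specifically, the equivalent is starting the app with -o 0.0.0.0 or adding set :bind, '0.0.0.0' to the app.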
This does not work on a plain Amazon AMI, with the installation done as shown in http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-install.html
Steps 1, 2, and 3 work (agent installation and starting the daemon) as shown, but then:
[ec2-user@ip-<ip> ~]$ curl http://localhost:51678/v1/metadata
curl: (7) Failed to connect to localhost port 51678: Connection refused
In fact netstat shows some listening TCP ports, but none of them can be connected to, and definitely not TCP 51678.
If you're using Amazon EC2, have made sure that you have a Custom TCP rule for 0.0.0.0 in your security groups, and still can't connect, try adding 0.0.0.0 to the first line of /etc/hosts:
sudo nvim /etc/hosts
Add a space after the last entry on the first line, so it looks like:
127.0.0.1 localhost 0.0.0.0