Connection issue when connecting from outside to Pilot (Istio)

We have an issue when connecting from outside the cluster to Pilot.
After a while the connection is closed and is not re-established.
When we restart the ingress gateway, the connection works fine again.
I suspect the issue is with the Istio ingress gateway; I checked the CDS/LDS/EDS/RDS/ISTIOD sync status.
My Istio version is 1.10.3.
istioctl proxy-status:
NAME                                                 CDS      LDS      EDS      RDS        ISTIOD                     VERSION
istio-egressgateway-78d75d8868-95znd.istio-system    SYNCED   SYNCED   SYNCED   NOT SENT   istiod-7b7587d6fd-6rtf8    1.10.3
istio-ingressgateway-5c5499b774-kfz49.istio-system   SYNCED   SYNCED   SYNCED   SYNCED     istiod-7b7587d6fd-6rtf8    1.10.3
What should be checked or changed to resolve this?
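One common cause of long-lived connections through the ingress gateway silently dying is an idle timeout somewhere on the path (Envoy itself, a cloud load balancer, a NAT), with no TCP keepalive to keep the connection warm or to detect the drop quickly. As a hedged starting point, and assuming the external clients reach istiod via the ingress gateway, a DestinationRule enabling TCP keepalive toward istiod can be applied like this (the resource name and the timing values are illustrative, not a recommendation):

kubectl apply -f - <<'EOF'
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: istiod-tcp-keepalive   # illustrative name
  namespace: istio-system
spec:
  host: istiod.istio-system.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        tcpKeepalive:
          time: 600s      # start sending keepalive probes after 10 minutes idle
          interval: 75s   # interval between probes
          probes: 9       # give up after 9 failed probes
EOF

It is also worth comparing the idle timeout of any load balancer sitting in front of the ingress gateway with the expected lifetime of these connections.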

Related

AWS EKS cluster with Istio sidecar auto-injection problem and pod external DB connection issue

I built a new AWS EKS cluster with Terraform, with a single node group containing a single node.
The cluster is running Kubernetes 1.22, and I can't seem to get anything to work correctly.
Istio itself installs fine; I have tried versions 1.12.1, 1.13.2, 1.13.3 and 1.13.4, and all show the same issue with auto-injecting the sidecar:
Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded
But there are also other issues with the cluster, even without Istio. My application image is pulled and the pod starts fine, but it cannot connect to the database. The DB is external to the cluster, and no other build (running on Azure) has any issue connecting to it.
I am not sure whether this is related to the application failing to reach the external DB, but could the sidecar issue have something to do with BoundServiceAccountTokenVolume?
There is a warning about it being enabled on all clusters from 1.21, which is a little odd, as I have other applications with Istio running on another 1.21 cluster on AWS EKS.
I also have this application running with Istio, without any issues, on Azure on 1.22.
I seem to have fixed it :)
It turned out to be a port issue with the security groups. I was letting Terraform build its own security group.
When I opened all the ports in the inbound section, everything worked.
I then closed them all again and opened only 80 and 443, which again stopped Istio from auto-injecting its sidecar.
The injection webhook call was trying to reach istiod on port 15017, so I opened just that port alongside 80 and 443.
Once that port was open, my app started working and got its sidecar from Istio without any issue.
So it seems the security group was blocking the control plane from reaching the webhook (and pod-to-pod communication)... unless I have completely messed up my Terraform build in some way.
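For anyone hitting the same "context deadline exceeded" on the injection webhook with a custom EKS security group setup, the fix described above amounts to allowing the EKS control plane to reach istiod's webhook port (15017) on the worker nodes. A minimal sketch with the AWS CLI, where <node-sg-id> and <cluster-sg-id> are placeholders for the worker-node and EKS cluster security groups:

# allow the EKS control plane to reach the istiod sidecar-injection webhook
aws ec2 authorize-security-group-ingress \
  --group-id <node-sg-id> \
  --protocol tcp \
  --port 15017 \
  --source-group <cluster-sg-id>

The same rule can of course be expressed as a Terraform aws_security_group_rule resource instead of being added by hand.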

How to install/use an AWS ElastiCache Redis session for Yii2

The question is simple:
How do I install and use an AWS ElastiCache Redis session store for Yii2?
Self-answered; please read below.
Steps to install and enable Redis:
[Self-managed] Install redis-server on your web server, using the DigitalOcean guide, or the equivalent for AWS EC2.
[AWS ElastiCache] AWS ElastiCache for Redis: How to Use the AWS Redis Service.
Install the PHP Redis extension (AWS guide).
In php.ini, set session.save_handler = redis and point session.save_path at the ElastiCache endpoint (see the sketch after this list).
Restart the services.
Confirm the change with phpinfo();.
Configure the Redis session component in Yii2 as described here.
Make sure you follow these points:
Use the same security group for EC2 and ElastiCache.
Open Redis port 6379 in the AWS security group (0.0.0.0/0 in this setup).
When configuring Yii2, add the ElastiCache endpoint as the host in main.php.
These steps cover sessions only; modify the Yii2 main component accordingly if you want to use Redis for active data or caching as well.
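To make the php.ini and main.php steps concrete, here is a minimal sketch; the endpoint my-cache.abc123.0001.use1.cache.amazonaws.com is a hypothetical placeholder, and the Yii2 part assumes the yiisoft/yii2-redis extension is installed:

; php.ini - store native PHP sessions in ElastiCache
session.save_handler = redis
session.save_path = "tcp://my-cache.abc123.0001.use1.cache.amazonaws.com:6379"

// config/main.php (sketch, if you use the yii2-redis components)
'components' => [
    'redis' => [
        'class' => 'yii\redis\Connection',
        'hostname' => 'my-cache.abc123.0001.use1.cache.amazonaws.com', // ElastiCache endpoint
        'port' => 6379,
        'database' => 0,
    ],
    'session' => [
        'class' => 'yii\redis\Session', // stores Yii2 sessions in Redis
    ],
],

A quick redis-cli -h my-cache.abc123.0001.use1.cache.amazonaws.com ping from the EC2 instance (expecting PONG) is an easy way to confirm the security-group rules before touching the application.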

WSO2 ESB: log data is cleared when the service restarts. How can I keep it?

Every time the ESB service is restarted, the logs from the previous run are wiped. How do I keep the history? Thank you.
When the ESB is deployed in a VM, it does not clear the wso2carbon logs on a server restart. Could you please elaborate on your server deployment? If you are using a container-based deployment, you will need to mount the log directory to an external volume to avoid this.
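As an illustration of the container case: the carbon logs live under <CARBON_HOME>/repository/logs inside the container, and the exact path depends on the product and version, so the one below is only an assumed example. Mounting that directory onto the host (or onto a persistent volume in Kubernetes) keeps the history across restarts:

# sketch: keep WSO2 carbon logs outside the container (paths are examples only)
docker run -d \
  -v /var/log/wso2esb:/home/wso2carbon/wso2ei-6.6.0/repository/logs \
  my-wso2-esb-image

In Kubernetes the same idea is expressed as a volume plus a volumeMount on the ESB container pointing at the repository/logs directory.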

Get HTTP request logs from Kubernetes pods? (Running JupyterHub)

I am running a JupyterHub application on a Kubernetes cluster (specifically, managed Kubernetes on AWS: EKS). Each JupyterHub user gets their own pod when they spin up their notebook server.
I need to be able to monitor the HTTP requests that are being made from their notebook servers.
Is there any way to enable this type of logging? And if so, how could I consume these logs?
With an Istio service mesh you will be able to trace all incoming/outgoing HTTP requests of your JupyterHub pods.
Alternatively, you may use Zipkin, a distributed tracing system.
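As a concrete starting point with Istio, enabling Envoy's access log and reading it from the sidecar gives a per-request record (method, path, response code, upstream) for every notebook pod. A minimal sketch, assuming Istio is installed with istioctl and the user pods live in a namespace called jupyterhub (placeholder name):

# turn on Envoy access logging mesh-wide
istioctl install --set meshConfig.accessLogFile=/dev/stdout

# label the namespace so new notebook pods get the sidecar injected
kubectl label namespace jupyterhub istio-injection=enabled

# read the HTTP access log of a specific user's notebook pod
kubectl logs <notebook-pod-name> -n jupyterhub -c istio-proxy

From there the istio-proxy logs can be shipped to whatever log backend you already use (CloudWatch, Loki, ELK), while Zipkin or Jaeger cover the distributed-tracing side.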

WSO2 Kubernetes AWS deployment

Here is the issue I am encountering.
I am trying to deploy the WSO2 API Manager, which is open source.
The documentation on how to do this is here:
https://github.com/wso2/kubernetes-artifacts/tree/master/wso2am
Dockerfiles:
https://github.com/wso2/dockerfiles/tree/master/wso2am
What I did was build the Docker images required for Kubernetes.
I then take these Docker images and push them to EC2 Container Service.
I then update the WSO2 Kubernetes spec files (controllers) to use the images I pushed to EC2 Container Service.
I then go into:
kubernetes-artifacts/wso2am and run "./deploy.sh -d"
It then runs the wait-for-launch script, but it just keeps looping and never detects that the server is up.
root@aw-kubernetes:~/wso2kubernetes/kubernetes-artifacts/wso2am# ./deploy.sh -d
Deploying MySQL Governance DB Service...
service "mysql-govdb" created
Deploying MySQL Governance DB Replication Controller...
replicationcontroller "mysql-govdb" created
Deploying MySQL User DB Service...
service "mysql-userdb" created
Deploying MySQL User DB Replication Controller...
replicationcontroller "mysql-userdb" created
Deploying APIM database Service...
service "mysql-apim-db" created
Deploying APIM database Replication Controller...
replicationcontroller "mysql-apim-db" created
Deploying wso2am api-key-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32013,tcp:32014,tcp:32015) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-key-manager" created
Deploying wso2am api-store Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32018,tcp:32019) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-store" created
Deploying wso2am api-publisher Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32016,tcp:32017) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-publisher" created
Deploying wso2am gateway-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32005,tcp:32006,tcp:32007,tcp:32008) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-gateway-manager" created
Deploying wso2am api-key-manager Replication Controller...
replicationcontroller "wso2am-api-key-manager" created
Waiting wso2am to launch on http://172.20.0.30:32013
.......
I tried commenting out the "/wait-until-server-starts.sh" script and having it just start everything, but I am still not able to access the API Manager.
I could really use some insight on this, as I am completely stuck.
I have tried everything I can think of.
If anyone on the WSO2 team, or anyone who has done this before, could help out, it would really be appreciated.
My theory right now is that maybe this was never tested on an AWS deployment, only on a local setup, but I could be wrong.
Any help would be greatly appreciated!
EDIT:
Adding some output from kubectl while the deployment is in the loop waiting for the server to come up:
root@aw-kubernetes:~# kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
mysql-apim-db-b6b0u            1/1     Running   0          11m
mysql-govdb-0b0ud              1/1     Running   0          11m
mysql-userdb-fimc6             1/1     Running   0          11m
wso2am-api-key-manager-0pse8   1/1     Running   0          11m
Also, kubectl logs shows that everything started properly:
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent Server : WSO2 API Manager-1.10.0
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 34 sec
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent Mgt Console URL : https://wso2am-api-key-manager:32014/carbon/
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent API Publisher Default Context : http://wso2am-api-key-manager:32014/publisher
[2016-07-21 18:46:59,263] INFO - CarbonUIServiceComponent API Store Default Context : http://wso2am-api-key-manager:32014/store
@Alex This was an issue in the WSO2 Kubernetes Artifacts v1.0.0 release. We have fixed it in the master branch [1].
The problem was that the deployment process was trying to verify the WSO2 API-M server sockets using the private IP addresses of the Kubernetes nodes. We updated the scripts to use the public/external IP addresses if they are available via the Kubernetes CLI. For this to work, you may need to set up Kubernetes on AWS according to [2].
[1] https://github.com/wso2/kubernetes-artifacts/commit/53cc6979965ebed8800b803bb3454f3b758b8c05
[2] http://kubernetes.io/docs/getting-started-guides/aws/
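For anyone checking whether their nodes actually expose the external addresses the updated scripts rely on, a quick way to list them (a sketch, not part of the WSO2 scripts) is:

# list ExternalIP (if any) and InternalIP for every node
kubectl get nodes \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\t"}{.status.addresses[?(@.type=="InternalIP")].address}{"\n"}{end}'

If no ExternalIP is reported, NodePort services (e.g. port 32013 in the output above) will only be reachable via the nodes' private addresses or through a separately provisioned load balancer.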