I am new to Istio. We are planning to use Istio as an SSL service mesh for Kafka in a Kubernetes (K8S) environment.
I would like to check whether Istio supports Kafka wire protocol communication.
Thanks for your help
The short answer is "currently no".
All searches for an answer eventually lead to open issues and discussions, with a lot of different suggestions, on GitHub: Kafka protocol filter.
Some information and an overview of how Kafka may work with a service mesh can also be found in the Kafka and the service mesh presentation.
I will update my answer if I find more useful info for you.
This is my first time using GCP. I'm trying to put my project into production and I'm running into problems getting WebSocket communication working. I've been googling around and I'm still unclear on whether Cloud Run on GKE supports inbound/outbound WebSocket connections. The limitations docs say that fully managed Cloud Run does not work with inbound WebSockets, but they don't say anything about Cloud Run on GKE having issues with WebSockets.
I can post my ingress config and other details; I'm not really sure what exactly is relevant here, but I've just followed their setup guide, so everything is still mostly set to the defaults.
The short answer is no: inbound WebSockets are not supported. However, outbound WebSockets do work. This is a known limitation of Cloud Run. You can use either plain GKE or App Engine Flex as recommended alternatives.
The short answer, as of January 2021, is yes! You will need to use the beta API when deploying your service. Details are here: https://cloud.google.com/blog/products/serverless/cloud-run-gets-websockets-http-2-and-grpc-bidirectional-streams
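For reference, a minimal sketch of deploying through the beta command group (the service name, image path, and region below are placeholders):

gcloud beta run deploy my-ws-service \
  --image gcr.io/PROJECT_ID/my-ws-image \
  --region us-central1 \
  --allow-unauthenticated

Once deployed this way, inbound WebSocket connections to the service URL should be accepted.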
I recently started to learn about deploying Istio across multiple clusters. There are two ways: one is using a single control plane to monitor multiple clusters, and the other is deploying a control plane in each cluster and letting them communicate with each other.
If I understand the concept of the service registry correctly, it is used for service discovery in Istio. Is there any way to check or monitor which services are registered in the service registry?
You can use:
istioctl proxy-status
Check https://istio.io/docs/reference/commands/istioctl/#istioctl-proxy-status for more info.
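If you want to go deeper than the sync status, you can also inspect what a given sidecar has actually received from the registry; a sketch (the pod name and namespace are placeholders):

istioctl proxy-config cluster <pod-name>.<namespace>
istioctl proxy-config endpoint <pod-name>.<namespace>

In a Kubernetes-only setup the registry is populated from Kubernetes services, so kubectl get services --all-namespaces also shows what Istio will register.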
We are using the logentries log driver on an AWS ECS service to send logs to our Logentries account. We have configured the ECS service with the required parameters, such as logentries-token, but we have observed that after a certain amount of time some containers are no longer able to send logs to Logentries.
I appreciate your help in advance; I am unable to find proper documentation for this on either Logentries or AWS.
Thanks,
We had the same issue, so I started digging deeper than usual.
The actual driver implementation is quite simple.
The dragon is in the dependency that does the socket and TLS handling.
There is an open issue and a PR to solve a very similar issue.
The PR is stale and I don't see a chance for it to land, so I moved away from Logentries and recommend doing the same. CloudWatch will probably be a better fit.
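If you do switch, a minimal sketch of the awslogs log driver options (the region, log group, and image below are placeholders; in an ECS task definition the same keys go under the container's logConfiguration options):

docker run \
  --log-driver=awslogs \
  --log-opt awslogs-region=us-east-1 \
  --log-opt awslogs-group=my-ecs-service \
  --log-opt awslogs-create-group=true \
  my-image

Note that awslogs-create-group=true requires the logs:CreateLogGroup permission.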
I have an Active-Active Deployment of WSO2 API Manager. I don't know if I should enable Hazelcast Clustering, because:
A) On one hand, Hazelcast doesn't appear in the official documentation link that I followed to deploy.
B) On the other hand, this official documentation link says that backend throttling limits will not be shared across the cluster when Hazelcast clustering is disabled (and I of course want backend throttling limits to be shared across the cluster!). But that link is under the "Distributed Deployment" section, and I don't have a "Distributed Deployment". As I said, I have an "Active-Active Deployment", so I don't know whether I should follow that link and install Hazelcast.
If you need backend throttling, then you have to enable clustering on the nodes. Although it is mentioned under distributed deployment, an active-active deployment also needs clustering if you require backend service throttling.
The idea here is that the two nodes serve requests while they are in a cluster, with backend service throttling enabled.
Regarding whether you should follow that link and install Hazelcast: you don't need to install anything. Just enable clustering and set up the IP addresses if the WKA (well-known address) membership scheme is used (please note that many cloud providers and native Docker don't support multicast).
The Hazelcast cluster is used to broadcast token invalidation messages and throttling limits. You don't have to enable the cluster at all, but then you may miss those messages between nodes.
I've been doing some server architecture design over the past few weeks and have run into an issue that I need outside help with. I'm creating a game server for a massively multiplayer game, so I need to receive constant updates on entity locations, then broadcast them out to relevant clients.
I've written servers with scale in mind before, but they were stateless servers, so it wasn't all that difficult. If I'm deploying this server on a cloud platform like Google Cloud or AWS, is it better to simply scale up the instance that the server is running on, or should I opt for the reverse-proxy approach and deploy the server across multiple instances?
Sorry if this is a vague question. I can provide more details if necessary.
You may want to start here:
https://aws.amazon.com/gaming/
https://aws.amazon.com/gaming/game-server/
You should also consider messaging solutions such as SNS and SQS. If the app can receive push notifications, then SNS might be your best option.
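For the SNS/SQS route, here is a minimal sketch with the AWS CLI (the topic name, queue name, region, and account ID are placeholders) of fanning entity updates out from an SNS topic to a per-shard SQS queue:

aws sns create-topic --name entity-updates
aws sqs create-queue --queue-name game-shard-1
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:123456789012:entity-updates \
  --protocol sqs \
  --notification-endpoint arn:aws:sqs:us-east-1:123456789012:game-shard-1

The queue also needs an access policy that allows the topic to send messages to it.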