Connecting to on-prem Kafka cluster from cloud AWS using Kerberos auth

Is it possible to connect to an on-prem Kafka cluster using Kerberos authentication from a cloud-deployed service?
When we try to connect, we get the error below:
Caused by: KrbException: Generic error (description in e-text) (60) - Unable to locate KDC for realm "ABC.COM"
This is my JAAS config:
com.sun.security.auth.module.Krb5LoginModule required useKeyTab=true storeKey=true keyTab="/pathtokeytab" principal="principal_name@ABC.COM";
Please help if anyone has faced this error.
From this link, under the heading "Network connectivity to Kerberos", they say it's challenging to connect to an on-prem Kafka server from cloud-deployed services. Is it unachievable, or does it require some other configuration?
https://blog.cloudera.com/how-to-configure-clients-to-connect-to-apache-kafka-clusters-securely-part-1-kerberos/
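This error usually means the Kerberos library cannot find a KDC for the realm ABC.COM: either the JVM's krb5.conf has no entry for the realm, or the DNS SRV lookup (`_kerberos._udp.abc.com`) fails from the cloud network, which is common because on-prem DNS is typically not visible there. Listing the KDC explicitly in krb5.conf, and opening TCP/UDP 88 from the cloud subnet to the KDC, is the usual fix. A minimal sketch, where `kdc.abc.com` is a placeholder for your on-prem KDC host:

```ini
# /etc/krb5.conf -- minimal sketch; kdc.abc.com is a placeholder
[libdefaults]
    default_realm = ABC.COM
    dns_lookup_kdc = false

[realms]
    ABC.COM = {
        kdc = kdc.abc.com:88
        admin_server = kdc.abc.com:749
    }

[domain_realm]
    .abc.com = ABC.COM
    abc.com = ABC.COM
```

Point the JVM at the file with `-Djava.security.krb5.conf=/etc/krb5.conf` if it is not picked up from the default location.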

Related

Migrate Connector activation failed (vSphere to GCP) javax.net.ssl.SSLHandshakeException

I'm trying to migrate a VM from vCenter to GCP.
Unfortunately, I've encountered an error from the Migrate Connector on vCenter:
**Migrate Connector activation failed** Error: Migrate Connector "projects/XXXXXX/locations/europe-west1/sources/migrate-vsphere/datacenterConnectors/migrate-vsphere-ptwcvj" for source "migrate-vsphere" connected to vCenter "XXX.XXX.XXX.XXX" failed due to an error when connecting to vCenter. Details: **HTTP transport error: javax.net.ssl.SSLHandshakeException: The server selected protocol version TLS10 is not accepted by client preferences [TLS12]**
Components:
VM Host
vCenter Server 5 Essentials
SSL.Version = All
config.vpxd.sso.solutionsUser.certification=/etc/vmware-vpx/ssl/rui.crt
config.vpxd.ssl.CAPath=/etc/ssl/certs
vSphere Client 5.1
VM
Windows Server 2019 Standard
Project in GCP
User Account with roles:
Create Service Account
Owner
Service Account Key Admin
VM Migration Administrator
Dedicated GCP service account created through Migration Connector VM interface.
M4C status:
`Appliance connectivity and health:
Migrate Connector appliance health: Healthy
Appliance version: 2.1.1943
Proxy setting: not enabled
DNS: Automatic
Gateway: 0.0.0.0
Cloud APIs network connection: Reachable
VM Migration service:
Registered with VM Migration service: True
Connectivity to VM Migration service: True
Project: XXXXXX
Location: europe-west1
Source name: migrate-vsphere
Data center connector name: migrate-vsphere-uajfqq
Migrate Connector service account: vsphere2gcp@XXXXXXX.iam.gserviceaccount.com
On-Prem environment:
Appliance not connected to vSphere`
Does anyone have an idea how to fix this, or where to find a solution?
If anyone has encountered this problem, please tell me how to solve it.
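The handshake failure says the vCenter endpoint only offers TLS 1.0, while the connector insists on TLS 1.2. A quick way to see which protocol versions an endpoint accepts is `openssl s_client`; a sketch, where `vcenter.example.com` stands in for your vCenter host or IP:

```shell
# probe_tls HOST -- report which TLS versions HOST:443 accepts.
probe_tls() {
    host="$1"
    for ver in tls1 tls1_1 tls1_2; do
        # A successful handshake means the version is accepted;
        # any failure (including an unreachable host) counts as rejected.
        if echo | openssl s_client -connect "${host}:443" "-${ver}" >/dev/null 2>&1; then
            echo "${ver}: accepted"
        else
            echo "${ver}: rejected"
        fi
    done
}

# vcenter.example.com is a placeholder -- replace with your vCenter address
probe_tls "vcenter.example.com"
```

If only `tls1` comes back accepted, TLS 1.2 has to be enabled on the vCenter side (or the appliance upgraded) before the connector can talk to it.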

Error while using Dataflow Kafka to BigQuery template

I am using the Dataflow Kafka-to-BigQuery template. After launching the Dataflow job, it stays in the queue for some time, then fails with the error below:
Error occurred in the launcher container: Template launch failed. See console logs.
When looking at the logs, I see the following stack trace:
at org.apache.beam.runners.dataflow.DataflowRunner.run(DataflowRunner.java:192)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:317)
at org.apache.beam.sdk.Pipeline.run(Pipeline.java:303)
at com.google.cloud.teleport.v2.templates.KafkaToBigQuery.run(KafkaToBigQuery.java:343)
at com.google.cloud.teleport.v2.templates.KafkaToBigQuery.main(KafkaToBigQuery.java:222)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
While launching the job, I provided the parameters below:
Kafka topic name
bootstrap server name
BigQuery table name
service account email
zone
My Kafka topic only contains the message: hello
Kafka is installed on a GCP instance that is in the same zone and subnet as the Dataflow workers.
Adding this here as an answer for posterity:
"Timeout expired while fetching topic metadata" indicates that the Kafka client is unable to connect to the broker(s) to fetch metadata. This could be due to various reasons, such as the worker VMs being unable to talk to the brokers (are you connecting over public or private IPs? Check inbound firewall settings if using public IPs). It could also be due to an incorrect port, or to the broker requiring SSL connections. One way to confirm is to install the Kafka client on a GCE VM in the same subnet as the Dataflow workers and verify that it can connect to the Kafka brokers.
Refer to [1] to configure the SSL settings for the Kafka client (which you can test using the CLI on a GCE instance). The team that manages the broker(s) can tell you whether they require SSL connections.
[1] https://docs.confluent.io/platform/current/kafka/authentication_ssl.html#clients
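To go with the "install a Kafka client on a GCE VM" suggestion, a cheaper first check is plain TCP reachability from that subnet. A sketch, where `10.128.0.5:9092` is a placeholder for your bootstrap server:

```shell
# broker_reachable HOST PORT -- quick TCP reachability check for a Kafka broker.
# Run this from a GCE VM in the same subnet as the Dataflow workers.
broker_reachable() {
    # bash's /dev/tcp pseudo-device opens a TCP connection; timeout caps the wait.
    if timeout 5 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
        echo "broker $1:$2 is reachable"
    else
        echo "broker $1:$2 is NOT reachable -- check IPs, port, and firewall rules"
    fi
}

# 10.128.0.5:9092 is a placeholder -- replace with your bootstrap server
broker_reachable "10.128.0.5" 9092
```

If the port is reachable but metadata fetches still time out, try `openssl s_client -connect <broker>:9093` to see whether that listener expects TLS.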
Hey, thanks for the help. I was trying to access Kafka with the internal IP; it worked when I changed it to the public IP. Actually, I am running both the Kafka machines and the workers in the same subnet, so it should work with the internal IP as well... I am checking it now.

Not able to access REST services hosted on AWS EC2 from a local WAMP server

I have deployed the Pimcore DAM app on an Amazon EC2 instance. Pimcore offers REST services to integrate it with PHP, which I am able to access successfully from the Postman client, but the same services are not working when I try to access them from my local WAMP server through a cURL request. It gives me the error below:
"cURL Error #: Failed to connect to 13.58.32.47 port 80: Timed out" [13.58.32.47 is the public IP of the EC2 instance]
Could you please help me resolve this connectivity issue?
Inbound Rules - [1]: https://i.stack.imgur.com/JJIW8.png
Outbound Rules - [1]: https://i.stack.imgur.com/2ACYF.png
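Since Postman works but WAMP's cURL times out, the first thing to establish is whether the WAMP machine's network can reach port 80 at all (corporate firewalls and proxies often block direct outbound HTTP, and PHP's cURL then needs proxy settings). A sketch using command-line curl from the WAMP machine; the IP is the one from the question:

```shell
# check_http HOST -- probe whether plain HTTP to HOST works from this network.
check_http() {
    # -m 10 caps the wait at 10 seconds; on success print status code and timing.
    curl -sS -o /dev/null -m 10 -w "HTTP %{http_code} in %{time_total}s\n" "http://$1/" 2>&1 \
        || echo "connection to $1 failed -- check the security group inbound rule for TCP 80 and any outbound firewall/proxy on your own network"
}

# 13.58.32.47 is the EC2 public IP from the question
check_http "13.58.32.47"
```

If command-line curl succeeds from the same machine while PHP's cURL fails, the issue is in the PHP/WAMP cURL configuration rather than in the network path.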

Can't connect to my web server on an AWS EC2 instance

I am new to AWS and have very little networking knowledge.
I have set up an EC2 instance and successfully installed:
MongoDB,
my Node.js app server, and
my Angular web app, all on the same instance.
I tried to access my web server from a browser using
https://ec2-54-255-239-55.ap-southeast-1.compute.amazonaws.com:3443/
and
ec2-54-255-239-55.ap-southeast-1.compute.amazonaws.com:3443/
but have not been successful so far. The error message:
This site can’t be reached
ec2-54-255-239-55.ap-southeast-1.compute.amazonaws.com refused to connect.
I need help
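"Refused to connect" (rather than a timeout) usually means nothing is listening on that port, or the server is bound to 127.0.0.1 only. Separately, the instance's security group must allow inbound TCP 3443, and the `https://` URL only works if the Node server actually serves TLS on that port. A quick check to run on the instance itself:

```shell
# check_listener PORT -- is anything listening on this TCP port, and on which address?
check_listener() {
    # ss is the modern tool; fall back to netstat if ss is unavailable.
    (ss -tln 2>/dev/null || netstat -tln 2>/dev/null) | grep ":$1 " \
        || echo "nothing listening on TCP $1 -- is the Node server running, and bound to 0.0.0.0 rather than 127.0.0.1?"
}

check_listener 3443
```

If the listener shows `127.0.0.1:3443`, bind the Node server to `0.0.0.0` so it accepts external connections; if it shows `0.0.0.0:3443` and the error persists, check the security group's inbound rule for port 3443.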

WSO2 Kubernetes AWS deployment

Here is the issue I am encountering.
I am trying to deploy the WSO2 API Manager which is open source.
You can find the documentation on how to do this here:
https://github.com/wso2/kubernetes-artifacts/tree/master/wso2am
Dockerfiles:
https://github.com/wso2/dockerfiles/tree/master/wso2am
What I did was build the Docker images required for Kubernetes.
I then pushed these Docker images to EC2 Container Service.
I then updated the WSO2 Kubernetes spec files (controllers) to use the images I pushed to EC2 Container Service.
I then go into kubernetes-artifacts/wso2am and run "./deploy.sh -d".
It then runs the wait-for-launch script, but it just keeps looping and never "finds" that it is up.
root@aw-kubernetes:~/wso2kubernetes/kubernetes-artifacts/wso2am# ./deploy.sh -d
Deploying MySQL Governance DB Service...
service "mysql-govdb" created
Deploying MySQL Governance DB Replication Controller...
replicationcontroller "mysql-govdb" created
Deploying MySQL User DB Service...
service "mysql-userdb" created
Deploying MySQL User DB Replication Controller...
replicationcontroller "mysql-userdb" created
Deploying APIM database Service...
service "mysql-apim-db" created
Deploying APIM database Replication Controller...
replicationcontroller "mysql-apim-db" created
Deploying wso2am api-key-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32013,tcp:32014,tcp:32015) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-key-manager" created
Deploying wso2am api-store Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32018,tcp:32019) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-store" created
Deploying wso2am api-publisher Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32016,tcp:32017) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-api-publisher" created
Deploying wso2am gateway-manager Service...
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:32005,tcp:32006,tcp:32007,tcp:32008) to serve traffic.
See http://releases.k8s.io/release-1.3/docs/user-guide/services-firewalls.md for more details.
service "wso2am-gateway-manager" created
Deploying wso2am api-key-manager Replication Controller...
replicationcontroller "wso2am-api-key-manager" created
Waiting wso2am to launch on http://172.20.0.30:32013
.......
I tried commenting out the "/wait-until-server-starts.sh" script and having it just start everything, but I was still not able to access the API Manager.
Could really use some insight on this as I am completely stuck.
I have tried everything I can think of.
If anyone on the WSO2 team or that has done this could help out it would really be appreciated.
My theory right now is that maybe this was never tested on AWS, only in a local setup, but I could be wrong.
Any help would be greatly appreciated!
EDIT:
Adding some outputs from kubectl logs etc.; while it is in the loop waiting for the server to come up, I see these things:
root@aw-kubernetes:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-apim-db-b6b0u 1/1 Running 0 11m
mysql-govdb-0b0ud 1/1 Running 0 11m
mysql-userdb-fimc6 1/1 Running 0 11m
wso2am-api-key-manager-0pse8 1/1 Running 0 11m
Also, kubectl logs shows that everything started properly:
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent Server : WSO2 API Manager-1.10.0
[2016-07-21 18:46:59,049] INFO - StartupFinalizerServiceComponent WSO2 Carbon started in 34 sec
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent Mgt Console URL : https://wso2am-api-key-manager:32014/carbon/
[2016-07-21 18:46:59,262] INFO - CarbonUIServiceComponent API Publisher Default Context : http://wso2am-api-key-manager:32014/publisher
[2016-07-21 18:46:59,263] INFO - CarbonUIServiceComponent API Store Default Context : http://wso2am-api-key-manager:32014/store
@Alex This was an issue in the WSO2 Kubernetes Artifacts v1.0.0 release. We have fixed this in the master branch [1].
The problem was that the deployment process was trying to verify the WSO2 API-M server sockets using the private IP addresses of the Kubernetes nodes. We updated the scripts to use the public/external IP addresses if they are available via the Kubernetes CLI. For this to work, you may need to set up Kubernetes on AWS according to [2].
[1] https://github.com/wso2/kubernetes-artifacts/commit/53cc6979965ebed8800b803bb3454f3b758b8c05
[2] http://kubernetes.io/docs/getting-started-guides/aws/
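Following up on the fix above: since the updated scripts probe the servers via the nodes' external IPs, it is worth confirming that your AWS nodes actually report an ExternalIP. A sketch (assumes kubectl is configured against the cluster; prints a fallback message otherwise):

```shell
# node_external_ips -- list the external IPs of the Kubernetes nodes,
# which the fixed deploy scripts probe instead of the private IPs.
node_external_ips() {
    kubectl get nodes \
        -o jsonpath='{range .items[*]}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}' \
        2>/dev/null || echo "kubectl not configured for a cluster"
}

node_external_ips
```

If the nodes have no ExternalIP address (e.g. a private-subnet setup), the wait-for-launch check would still loop forever, so the cluster needs to be provisioned with public node addresses as described in [2].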