Accessing WSO2 IS-KM SOAP Endpoint in K8s cluster

I have deployed API Manager 3.1.0 and Identity Server as Key Manager (version 5.10.0) in Kubernetes using the GA-release WSO2 Docker images.
I brought up two Active-Active instances of WSO2 API Manager and Identity Server as Key Manager and created ClusterIP services to access those instances.
I installed the NGINX Ingress Controller as a NodePort service and exposed port 443 (HTTPS) so the cluster instances can be accessed from outside (browser).
I created ingress rules for multiple hostnames to reach the individual ClusterIP services for the API Manager, API Gateway, and Identity Server.
I configured the hostnames in deployment.toml to match the hostnames given in the ingress rules and set transport.https.port to 443. So I am able to access the API Manager with the URL https://wso2apim/publisher: the URL lands on the ingress controller, which internally routes to the specific ClusterIP service based on the provided hostname.
Now I have to access the SOAP endpoints of WSO2 Identity Server to configure some claims and to provision some default roles. From outside the k8s cluster I can reach the SOAP endpoints via the ingress hostname, e.g. wso2is-km. However, I also want to access the SOAP endpoints from within the cluster, for which I have to use the ClusterIP service of the Identity Server. I can reach the ClusterIP service, but the endpoint configured inside the generated SOAP WSDL is the ingress hostname on port 443, which is not reachable from within the cluster.
Is there any way to configure two different endpoints, so that the SOAP WSDL contains an endpoint referring to the ClusterIP service of the Identity Server? Or is there a way to edit this generated WSDL before consumption?
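For reference, the ingress side of a setup like this might look like the sketch below; the service names, TLS secret, and port 9443 are assumptions, and the exact API version depends on the cluster version. Note that inside the cluster a ClusterIP service is also reachable by its internal DNS name, e.g. https://wso2is-km-service.&lt;namespace&gt;.svc.cluster.local:9443, which is the kind of address an in-cluster WSDL consumer would need.

```yaml
# Sketch only: one NGINX ingress routing two hostnames to the
# WSO2 ClusterIP services. Names, secret, and ports are assumptions.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: wso2-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
    - hosts:
        - wso2apim
        - wso2is-km
      secretName: wso2-tls                 # assumed secret name
  rules:
    - host: wso2apim
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2apim-service    # assumed ClusterIP service name
              servicePort: 9443
    - host: wso2is-km
      http:
        paths:
          - path: /
            backend:
              serviceName: wso2is-km-service   # assumed ClusterIP service name
              servicePort: 9443
```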

Unable to access api hosted in EKS and routed via istio-ingressgateway with NLB

New to AWS here; trying to expose an API and having issues.
I have an API deployed to an AWS EKS cluster, where my API is connected to a ClusterIP service.
That ClusterIP service is attached to a VirtualService (VS) exposing the API's port and a fixed hostname, with routes matched by prefix. This VS is connected to a Gateway (ingressgateway) declaring both HTTP (80) and HTTPS (443) for all hosts (* asterisk).
After that, all our HTTP and HTTPS requests are mapped to two NodePorts on the istio-ingressgateway service in the istio-system namespace.
These two exposed NodePorts are consumed by target groups registered on the same NodePorts, and those target groups are listened to by our NLB.
The NLB is connected to a DNS entry in Route 53 via the NLB's DNS name, as a CNAME record (also tried an A record).
Now, when I try to access my API from the browser using the above setup, the A-type record returns 500: Internal Server Error with no errors on my API's pods, and the CNAME gives no results, just a timeout.
I followed the same process used for another API deployed on the same cluster; the other API works fine, whereas my API is not accessible.
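The chain described above (ClusterIP service → VirtualService → Gateway → istio-ingressgateway) can be sketched roughly as follows; all names, hosts, ports, and the TLS credential here are assumptions, not the poster's actual manifests:

```yaml
# Sketch: Gateway accepting HTTP/HTTPS for all hosts on the
# istio-ingressgateway, plus a VirtualService routing a prefix
# to the API's ClusterIP service. Names are placeholders.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: api-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: api-tls-cert     # assumed TLS secret
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-api-vs
spec:
  hosts:
    - "api.example.com"                  # assumed fixed hostname
  gateways:
    - api-gateway
  http:
    - match:
        - uri:
            prefix: /myapi               # assumed route prefix
      route:
        - destination:
            host: my-api-service         # assumed ClusterIP service
            port:
              number: 8080
```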
Edit 1: tried capturing the error with CloudWatch, but we got this not-so-informative error:
{
  "requestId": "e0etYh9BvHcES6A=",
  "IP": "<ip-address>",
  "requestTime": "16/Jan/2023:05:19:17 +0000",
  "httpMethod": "GET",
  "routeKey": "$default",
  "status": "500",
  "protocol": "HTTP/1.1",
  "responseLength": "35"
}
Edit 2: was able to make it work.
Solution: as our NLB is configured with an internal scheme, we needed to connect it to the API Gateway. Other than that, the Route 53 record needed to be configured as an A record. Once the changes were in place, we were able to access our API from the browser.
Questions:
Is this the proper way to expose an API from the EKS cluster with an NLB and istio-ingressgateway service?
Are we only allowed to have one service routed via istio-ingressgateway under istio-system? Do we need to write a new one for another API?
Looks like the issue was related to the NLB being configured with an internal scheme and not being connected to the API Gateway. The Route 53 record also needs to be configured as an A record instead of a CNAME.

WSO2 API Manager 3.0.0 how to use HA routing of services

We have two API servers running in HA mode, i.e. the same set of services runs on both VMs with the same environment. We would like to use WSO2 APIM for API security, but the problem is that we have not been able to find out how to route to HA services in WSO2 APIM.
E.g.
API Server 1- http://192.168.0.2/getCustomerDetails
API Server 2- http://192.168.0.3/getCustomerDetails
API Gateway- 192.168.0.10
Once registered on API Gateway the service endpoints become-
URL1- https://192.168.0.10:8243/getCustInfo1
URL2- https://192.168.0.10:8243/getCustInfo2
Now the question is: how does WSO2 APIM decide where to route a request (URL1 or URL2) for the same business service? Or is there some concept like virtual IP usage in WSO2 APIM?
You don't have to create two APIs in API Manager for your two backend URLs. Create a single API and use Load Balanced or Failover endpoints [1].
[1] https://apim.docs.wso2.com/en/latest/Learn/DesignAPI/Endpoints/high-availability-for-endpoints/
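As a rough illustration of [1], a load-balanced endpoint for the two backends might be declared in the API's endpoint configuration along these lines; field names are from memory and should be checked against the docs for your APIM version:

```json
{
  "endpoint_type": "load_balance",
  "algoCombo": "org.apache.synapse.endpoints.algorithms.RoundRobin",
  "sessionManagement": "none",
  "production_endpoints": [
    { "endpoint_type": "http", "url": "http://192.168.0.2" },
    { "endpoint_type": "http", "url": "http://192.168.0.3" }
  ]
}
```

With a configuration like this, the gateway distributes requests across the two backends (round robin by default) behind a single published URL, so no virtual IP is needed.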

How can I install SSL certificate to aws load balancer in kubernetes?

I got a YAML file for specifying an SSL certificate (provided by AWS Certificate Manager) on the load balancer for a Kubernetes deployment. But we are running our Kubernetes cluster in an AWS China account, where the Certificate Manager option is not available. Now, if I have an SSL certificate provided by GoDaddy, how can I install it? Are there alternative ways to install the certificate rather than on the load balancer? Can I install it in my Tomcat container itself and build a new image with it?
As far as I know, you cannot set up an ELB deployed via a Kubernetes Service to use a certificate which is NOT an ACM certificate. In fact, if you take a look at the possible annotations here, you'll see that the only annotation available for selecting a certificate is service.beta.kubernetes.io/aws-load-balancer-ssl-cert, and the documentation for that annotation says the following:
ServiceAnnotationLoadBalancerCertificate is the annotation used on the
service to request a secure listener. Value is a valid certificate ARN.
For more, see http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/elb-listener-config.html
CertARN is an IAM or CM certificate ARN, e.g. arn:aws:acm:us-east-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
As you ask, you can for sure terminate your SSL inside your Kubernetes Pod and make the ELB a simple TCP proxy.
In order to do so, you need to add the following annotation to your Service manifest:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'tcp'
Also, you will need to forward both your HTTP and HTTPS ports in order to handle the HTTP-to-HTTPS redirect correctly inside your pod.
If you need more specific help, please post your current manifest.
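Putting the two points above together, a Service manifest for terminating TLS in the pod might look roughly like this; names and target ports are assumptions, and the pod (e.g. Tomcat) must hold the GoDaddy certificate itself:

```yaml
# Sketch: ELB as plain TCP proxy, TLS terminated in the pod.
# Service name, selector, and targetPorts are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-tomcat-service
  annotations:
    # Proxy raw TCP so TLS terminates in the pod, not at the ELB
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app: tomcat
  ports:
    - name: http          # kept open so the pod can redirect to HTTPS
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443    # Tomcat connector configured with the certificate
```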

Kubernetes service unable to access s3 with Istio sidecar

Just wondering if anyone has had any luck with (or found a solution for) using the AWS SDK to access AWS resources such as S3 when the service is injected with an Istio sidecar.
As Istio's documentation points out:
Outbound traffic goes through the Istio sidecar, so you will need to whitelist the DNS names or IPs of external services.
Plain HTTPS is not available; it can only be done by rewriting the URL to something like "http://www.google.com:443".
However, the AWS SDK handles the HTTPS connection itself, so I can't rewrite the URL. Subsequently, I get an "http: server gave HTTP response to HTTPS client" error.
Many thanks.
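For what it's worth, the whitelisting mentioned above is typically expressed as an Istio ServiceEntry; a sketch for S3 over TLS might look like this (region-specific hostnames may also be needed, and the API version depends on your Istio release):

```yaml
# Sketch: allow TLS passthrough from sidecar-injected pods to S3.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: aws-s3-external
spec:
  hosts:
    - "s3.amazonaws.com"
    - "*.s3.amazonaws.com"
  location: MESH_EXTERNAL
  ports:
    - number: 443
      name: tls
      protocol: TLS      # passthrough based on SNI, so the SDK's HTTPS works
  resolution: NONE       # required here because the hosts include a wildcard
```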

GCP IAP service doesn't find HTTPS proxy on load balancer

I'm trying to set up IAP with an HTTPS load balancer as per the instructions here: https://cloud.google.com/iap/docs/load-balancer-howto
My backend is a GKE cluster that has an Ingress on port 80 to access an HTTP web server.
The frontend is HTTPS with a valid certificate.
The traffic is routed without any issues from the LB to the web server through the HTTPS frontend, but when I try to enable IAP using the command below:
gcloud beta compute backend-services update k8s-be-30324--34c500f0e91c741a --iap=enabled --global
It returns the following output:
WARNING: IAP only protects requests that go through the Cloud Load Balancer. See the IAP documentation for important security best practices: https://cloud.google.com/iap/
WARNING: IAP has been enabled for a backend service that does not use HTTPS. Data sent from the Load Balancer to your VM will not be encrypted.
ERROR: (gcloud.beta.compute.backend-services.update) There was a problem modifying the resource:
- Invalid value for field 'resource.iap': ''. Backend service with IAP enabled requires at least one HTTPS proxy.
Any advice is appreciated! Thanks
So I figured out a workaround: use the same LB that is created with the Ingress for the Kubernetes cluster instead of a custom one. Of course, to avoid leaking unauthorized access, the HTTP frontend must be removed from that LB.