Issue with WSO2 API Manager permissions for roles

I have two instances of WSO2 API Manager running on two different servers. Both of them refer to the same UM_DB. I created a role by logging in with admin credentials on one server. After that, I checked for the role on the other server by logging in with admin credentials again. I found that the role existed on the other server, but the permissions that I assigned to that role did not exist there. Is this a bug in WSO2 API Manager, or did I miss something in the configuration?

You want to deploy two APIM instances in a cluster. It is better to refer to the APIM clustering guide to set it up properly. There are two things you need to understand when deploying APIM in a cluster:
1. You must point both instances to the same databases. There are three logical databases, i.e. the UM, Registry, and AM databases. These three can live in one physical DB, but both instances must point to the same ones.
2. You must configure Hazelcast-based clustering in the axis2.xml file. This is required because APIM uses a Hazelcast-based implementation to distribute the data held in its caches, and a lot of data is stored in those caches for high performance. In your scenario, I guess you have not configured this, which is why the permission tree has not been distributed between the two nodes. Please make sure to configure this properly; a minimal sketch follows below.
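A minimal sketch of the relevant axis2.xml fragment, assuming the static well-known-address (WKA) membership scheme; the domain, host names, and ports are placeholders:

```xml
<!-- repository/conf/axis2/axis2.xml on each node (illustrative values) -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.am.domain</parameter>
    <!-- This node's own address -->
    <parameter name="localMemberHost">10.0.0.1</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <!-- Well-known members of the cluster -->
    <members>
        <member>
            <hostName>10.0.0.2</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>
```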
I hope this helps you.


How an app deployed on GKE can deploy other app on same GCP project without authentication

I have a Java application that is deployed on a GKE cluster. Let's call it the "orchestrator".
The application should be able to deploy other applications on the same GCP project where the "orchestrator" app is running (on the same or a different GKE cluster), using helm CLI commands.
We were able to do that using Google Service Account authentication, where the JSON key is provided to the "orchestrator" and we could use it to generate tokens.
My question is: since both the "orchestrator" and the other apps are running on the same GCP project (sometimes on the same GKE cluster), is there a way to use some default credentials auto-discovered by GCP, instead of generating and providing a Service Account JSON key to the "orchestrator" app?
That way, the customer won't need to expose this key to our system, and the authentication will happen behind the scenes, without our app's intervention.
Is there something a GCP admin can do which make this use case work seamlessly?
I will elaborate on my comment.
When you are using a Service Account, you have to use keys to authenticate - each service account is associated with a public/private RSA key pair. As you are working on a GKE cluster, did you consider using Workload Identity, as mentioned in Best practices for using and managing SA?
According to Best practices for using and managing service accounts, all non-human accounts should be represented by Service Accounts:
Service accounts represent non-human users. They're intended for scenarios where a workload, such as a custom application, needs to access resources or perform actions without end-user involvement.
So in general, whenever you want to provide some permissions to applications, you should use Service Account.
In Types of keys for service accounts you can find the information that every Service Account needs an RSA key pair:
Each service account is associated with a public/private RSA key pair. The Service Account Credentials API uses this internal key pair to create short-lived service account credentials, and to sign blobs and JSON Web Tokens (JWTs). This key pair is known as the Google-managed key pair.
In addition, you can create multiple public/private RSA key pairs, known as user-managed key pairs, and use the private key to authenticate with Google APIs. This private key is known as a service account key.
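For illustration, the google-auth Python library exposes this Service Account Credentials API flow through its impersonation support; the project and service account names below are placeholders:

```python
import google.auth
from google.auth import impersonated_credentials

# Source credentials: whatever Application Default Credentials resolve to here.
source_credentials, project_id = google.auth.default()

# Mint short-lived credentials for the target service account; the caller
# needs roles/iam.serviceAccountTokenCreator on that account.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="orchestrator-sa@my-project.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,  # seconds
)
```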
You could also think about Workload Identity, but I am not sure if this would fulfill your needs, as there are still many unknowns about your environment.
Just as additional information, there was something called Basic Authentication which could have been an option for you, but for security reasons it has not been supported since GKE 1.19. This was mentioned in another Stack Overflow case: We have discouraged Basic authentication in Google Kubernetes Engine (GKE).
To sum up:
The best practice for providing permissions to non-human accounts is to use a Service Account. Each service account is associated with an RSA key pair, and you can create multiple user-managed keys.
It is also good practice to use Workload Identity if you have this option, but due to the lack of details it is hard to determine whether this would work in your scenario; a setup sketch follows below.
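For reference, a typical Workload Identity setup looks roughly like this; the cluster, region, project, namespace, and account names are placeholders:

```
# Enable Workload Identity on the cluster.
gcloud container clusters update my-cluster \
    --region us-central1 \
    --workload-pool=my-project.svc.id.goog

# Allow the Kubernetes service account to impersonate the Google service account.
gcloud iam service-accounts add-iam-policy-binding \
    orchestrator-sa@my-project.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:my-project.svc.id.goog[default/orchestrator-ksa]"

# Bind the Kubernetes service account to the Google service account.
kubectl annotate serviceaccount orchestrator-ksa \
    --namespace default \
    iam.gke.io/gcp-service-account=orchestrator-sa@my-project.iam.gserviceaccount.com
```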
Additional links:
Authenticating to the Kubernetes API server
Use the Default Service Account to access the API server
One way to achieve that is to use the default credentials approach mentioned here:
Finding credentials automatically. Instead of exposing the SA key to our app, the GCP admin can attach the same SA to the GKE cluster resource, and the default credentials mechanism will use that SA's credentials to access the APIs and resources (depending on the SA's roles and permissions).
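A minimal sketch of what the application code then looks like with the google-auth Python library; no key file is involved, since Application Default Credentials pick up the attached service account from the metadata server:

```python
import google.auth
from google.auth.transport.requests import Request

# Application Default Credentials: on GKE this resolves to the service account
# attached to the cluster's nodes (or to the pod, with Workload Identity).
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refresh to obtain an access token usable against Google APIs.
credentials.refresh(Request())
print(project_id, credentials.token[:16] + "...")
```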

How would you access Google Secret Manager from an external environment?

I have googled quite heavily for the last couple of hours to see if I could use Google Secret Manager from an external service like AWS Lambda or my local PC. I could not find anything helpful, or anything that properly describes the steps to do so.
I do not want to play with the APIs and end up doing the authentication via OAuth myself; I wish to use the client library. How would I go about doing so?
I have so far referred to the following links:
https://cloud.google.com/secret-manager/docs/configuring-secret-manager - Describes setting up Secret Manager, and prompts you to set up the Google Cloud SDK.
https://cloud.google.com/sdk/docs/initializing - Describes setting up the Cloud SDK (it doesn't seem like I get some kind of config file that helps me point my client library to the correct GCP project).
The issue I have is that it doesn't seem like I get access to some form of credential that I can use with the client library that consumes the secret manager service of a particular GCP project. Something like a service account token or a means of authenticating and consuming the service from an external environment.
Any help is appreciated, it just feels like I'm missing something. Or is it simply impossible to do so?
PS: Why am I using GCP secret manager when AWS offers a similar service? The latter is too expensive.
I think that your question applies to all GCP services, there isn't anything that is specific to Secret Manager.
As you mentioned, https://cloud.google.com/docs/authentication/getting-started documents how to create and use a Service Account. But this approach has the downside that you now need to figure out how to store the service account key (yet another secret!).
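A minimal sketch of that approach with the Python client library, assuming the service account has been granted the Secret Manager Secret Accessor role; the key file name, project, and secret ID are placeholders:

```python
from google.cloud import secretmanager

# Build a client from a downloaded service account key file. Alternatively,
# point GOOGLE_APPLICATION_CREDENTIALS at the file and construct
# SecretManagerServiceClient() with no arguments.
client = secretmanager.SecretManagerServiceClient.from_service_account_file(
    "sa-key.json"
)

name = "projects/my-project/secrets/my-secret/versions/latest"
response = client.access_secret_version(request={"name": name})
print(response.payload.data.decode("UTF-8"))
```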
If you're planning to access GCP Secret Manager from AWS, you can consider using https://cloud.google.com/iam/docs/configuring-workload-identity-federation#aws, which uses identity federation to map an AWS role to a GCP service account, without the need to store an extra secret somewhere.

How to share configurations of WSO2-IS among servers?

I have multiple WSO2-IS servers set up as my dev, staging, and prod environments.
I would want functionality wherein I can export all the configuration from one server (say dev) to another server (say staging) to make both servers identical, i.e. both servers would have the same database configurations, the same tenants, the same service providers, the same identity providers, and so on.
From the documentation here, I know that I can create service providers and identity providers using XML files, so in turn, I can share the XML files to sync SPs and IdPs between servers.
But is there a standard way to achieve that? Like, from the management console or so?
It even seems possible that syncing the [IS-HOME]/repository directory would ensure that the servers are identical, but are there any caveats to this approach?
There is no standard way to sync the service provider configurations among different environments as of now. This issue has been reported to track the feature requirement; it is a work in progress at the moment, and you can expect it in a future release.
One possible solution you can use to achieve your target is retrieving the service provider from the Admin Services and creating the same service provider in the other environment, as sketched below.
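A rough sketch of that approach in Python with the zeep SOAP library, assuming the admin service WSDLs are exposed (HideAdminServiceWSDLs set to false in carbon.xml) and that the IdentityApplicationManagementService operations behave as in IS 5.x; host names and credentials are placeholders:

```python
import requests
from zeep import Client
from zeep.transports import Transport

# WSO2 admin services accept HTTP basic auth with admin credentials.
session = requests.Session()
session.auth = ("admin", "admin")
session.verify = False  # dev only: the default WSO2 certificates are self-signed

transport = Transport(session=session)

# Read the service provider from the dev environment...
dev = Client(
    "https://dev-is:9443/services/IdentityApplicationManagementService?wsdl",
    transport=transport,
)
sp = dev.service.getApplication("my-service-provider")

# ...and recreate it in staging. Depending on the IS version, you may need
# createApplication with the basic info followed by updateApplication
# to set the full configuration.
staging = Client(
    "https://staging-is:9443/services/IdentityApplicationManagementService?wsdl",
    transport=transport,
)
staging.service.createApplication(sp)
```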
You can also use the file-based service provider configurations to achieve this target. But with that approach, you will not be able to see the service providers added from the configuration files in the management console. The next limitation you will face is that only SAML-based inbound authentication configurations can be added through the config files; others, such as OAuth 2.0 / OIDC inbound authentication configurations, cannot.
To answer your last question: you can't sync the [IS-HOME]/repository folder to achieve this. The reason you were able to observe this behavior seems to be that you are using the built-in H2 database, which lives in the [IS-HOME]/repository/database folder. With your file sync, you have actually synced the databases.

Why does Cloud Foundry allow creation of multiple service keys when they provide the same secrets?

Try, for example, the ElephantSQL or Redis services on Pivotal: you can create multiple service keys with different names, but they contain the same secrets.
I would have thought that by creating different keys I would be able to revoke them independently. Is that only possible for some services?
The behavior depends on the service broker implementation. The recommended approach is to generate unique values for every binding and service key. Not all service brokers do that though.
For reference, see the Open Service Broker API spec here: https://github.com/openservicebrokerapi/servicebroker/blob/v2.13/spec.md#credentials
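For illustration, the credentials object is returned per binding, so a broker that follows the recommendation can mint a distinct set for every service key; the values below are invented:

```json
{
  "credentials": {
    "uri": "mysql://binding-user-1:s3cr3tA@db.example.com:3306/appdb",
    "username": "binding-user-1",
    "password": "s3cr3tA"
  }
}
```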
I know, for example, that the Pivotal MySQL Service Broker follows this advice and generates unique credentials for each binding and service key.
https://github.com/cloudfoundry/cf-mysql-broker
Hope that helps!

Registry Database in MultiTenant mode

In the multi-tenanted environment (create and work with tenants), Registry Database is required to share the information in this database between the Gateway and Key Manager components.
So, in the above line, which information needs to be shared, and where does it need to be shared?
There are no special DB configurations for multi-tenancy; tenant-related information will be stored in the same DBs. And when you are clustering APIM, you need to share the databases among the nodes irrespective of tenant usage. You can refer to this guide for clustering.
What it basically says is that you need to do steps 8.c and 11 in the "Installing and configuring the databases" section of the API Manager clustering guide for the Gateway and Key Manager nodes as well.
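For illustration, sharing the registry database comes down to pointing the same datasource definition in repository/conf/datasources/master-datasources.xml of both the Gateway and Key Manager nodes at one physical database; the host, credentials, and names below are placeholders:

```xml
<!-- Illustrative shared registry datasource (same on both nodes) -->
<datasource>
    <name>WSO2REG_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-host:3306/regdb</url>
            <username>regadmin</username>
            <password>regadmin</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
```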