Adding Service Provider to WSO2 Identity Server via file is not working
I want to configure a service provider in WSO2 IDS that is available from the start. To do this I followed these instructions: Adding a service provider
However, when I boot the IDS and attempt to initiate a call to retrieve a token, I get the following response:
{
  "error_description": "A valid OAuth client could not be found for client_id: service-provider-fuga",
  "error": "invalid_client"
}
and the log within the terminal of WSO2 IDS shows the following:
[2021-08-05 14:06:55,111] [0d5f9d6c-5f87-4dc3-a87f-cb473cd4127c] DEBUG {org.wso2.carbon.identity.oauth2.OAuth2Service} - Error while finding application state for application with client_id: 1ou1fLDyFA9BEqywVtrR6vAxc48a org.wso2.carbon.identity.oauth.common.exception.InvalidOAuthClientException: Cannot find an application associated with the given consumer key : 1ou1fLDyFA9BEqywVtrR6vAxc48a
at org.wso2.carbon.identity.oauth.dao.OAuthAppDAO.handleRequestForANonExistingConsumerKey(OAuthAppDAO.java:1154)
at org.wso2.carbon.identity.oauth.dao.OAuthAppDAO.getAppInformation(OAuthAppDAO.java:354)
at org.wso2.carbon.identity.oauth2.util.OAuth2Util.getAppInformationByClientId(OAuth2Util.java:1887)
The request I initiated is as follows: https://localhost:9443/oauth2/token?grant_type=password&client_id=service-provider-fuga&client_secret=...&username=user&password=...
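For reference, the same request expressed as curl with the parameters moved into the POST body (WSO2's /oauth2/token also accepts them in the query string, but form parameters are the shape the OAuth2 spec expects; the client id, secret, and password below are placeholders):

```shell
# Password-grant token request as a form POST (sketch; values are placeholders).
# -k skips TLS verification for the default self-signed certificate.
curl -k -u '<client-id>:<client-secret>' \
  -d 'grant_type=password' \
  -d 'username=user' \
  -d 'password=<password>' \
  'https://localhost:9443/oauth2/token'
```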
The service provider file, which is placed at /home/wso2carbon/wso2-config-volume/repository/conf/identity/service-providers/service-provider.xml, is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<ServiceProvider>
  <ApplicationName>service-provider-fuga</ApplicationName>
  <Description>Service Provider configuration for FUGA</Description>
  <JwksUri/>
  <InboundAuthenticationConfig>
    <InboundAuthenticationRequestConfigs>
      <InboundAuthenticationRequestConfig>
        <InboundAuthKey>1ou1fLDyFA9BEqywVtrR6vAxc48a</InboundAuthKey>
        <InboundAuthType>oauth2</InboundAuthType>
        <InboundConfigType>standardAPP</InboundConfigType>
        <inboundConfiguration><![CDATA[<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<oAuthAppDO>
  <oauthConsumerKey>1ou1fLDyFA9BEqywVtrR6vAxc48a</oauthConsumerKey>
  <oauthConsumerSecret>...</oauthConsumerSecret>
  <applicationName>service-provider-fuga</applicationName>
  <callbackUrl></callbackUrl>
  <oauthVersion>OAuth-2.0</oauthVersion>
  <grantTypes>refresh_token password </grantTypes>
  <scopeValidators/>
  <pkceSupportPlain>true</pkceSupportPlain>
  <pkceMandatory>false</pkceMandatory>
  <state>ACTIVE</state>
  <userAccessTokenExpiryTime>3600</userAccessTokenExpiryTime>
  <applicationAccessTokenExpiryTime>3600</applicationAccessTokenExpiryTime>
  <refreshTokenExpiryTime>86400</refreshTokenExpiryTime>
  <idTokenExpiryTime>3600</idTokenExpiryTime>
  <audiences/>
  <bypassClientCredentials>true</bypassClientCredentials>
  <renewRefreshTokenEnabled>true</renewRefreshTokenEnabled>
  <requestObjectSignatureValidationEnabled>false</requestObjectSignatureValidationEnabled>
  <idTokenEncryptionEnabled>false</idTokenEncryptionEnabled>
  <idTokenEncryptionAlgorithm>null</idTokenEncryptionAlgorithm>
  <idTokenEncryptionMethod>null</idTokenEncryptionMethod>
  <tokenType>JWT</tokenType>
</oAuthAppDO>
]]></inboundConfiguration>
        <Properties/>
      </InboundAuthenticationRequestConfig>
    </InboundAuthenticationRequestConfigs>
  </InboundAuthenticationConfig>
  <LocalAndOutBoundAuthenticationConfig>
    <AuthenticationSteps>
      <AuthenticationStep>
        <StepOrder>1</StepOrder>
        <LocalAuthenticatorConfigs>
          <LocalAuthenticatorConfig>
            <Name>FugaAuthenticator</Name>
            <DisplayName>FUGA Authenticator</DisplayName>
            <IsEnabled>true</IsEnabled>
            <Properties/>
          </LocalAuthenticatorConfig>
        </LocalAuthenticatorConfigs>
        <FederatedIdentityProviders/>
        <SubjectStep>false</SubjectStep>
        <AttributeStep>false</AttributeStep>
      </AuthenticationStep>
    </AuthenticationSteps>
    <AuthenticationType>local</AuthenticationType>
    <alwaysSendBackAuthenticatedListOfIdPs>false</alwaysSendBackAuthenticatedListOfIdPs>
    <UseTenantDomainInUsername>false</UseTenantDomainInUsername>
    <UseUserstoreDomainInRoles>true</UseUserstoreDomainInRoles>
    <UseUserstoreDomainInUsername>false</UseUserstoreDomainInUsername>
    <SkipConsent>false</SkipConsent>
    <skipLogoutConsent>false</skipLogoutConsent>
    <EnableAuthorization>false</EnableAuthorization>
  </LocalAndOutBoundAuthenticationConfig>
  <RequestPathAuthenticatorConfigs/>
  <InboundProvisioningConfig>
    <ProvisioningUserStore/>
    <IsProvisioningEnabled>false</IsProvisioningEnabled>
    <IsDumbModeEnabled>false</IsDumbModeEnabled>
  </InboundProvisioningConfig>
  <OutboundProvisioningConfig>
    <ProvisioningIdentityProviders/>
  </OutboundProvisioningConfig>
  <ClaimConfig>
    <RoleClaimURI/>
    <LocalClaimDialect>true</LocalClaimDialect>
    <IdpClaim/>
    <ClaimMappings/>
    <AlwaysSendMappedLocalSubjectId>false</AlwaysSendMappedLocalSubjectId>
    <SPClaimDialects/>
  </ClaimConfig>
  <PermissionAndRoleConfig>
    <Permissions/>
    <RoleMappings/>
    <IdpRoles/>
  </PermissionAndRoleConfig>
  <IsSaaSApp>true</IsSaaSApp>
  <ImageUrl/>
  <AccessUrl/>
  <IsDiscoverable>true</IsDiscoverable>
</ServiceProvider>
When I attempt to upload the file manually via the management console of WSO2 IDS, I get an error that the application already exists.
When I boot the IDS without the service provider file and upload it manually, the authentication request works.
The version of WSO2 IDS on which this occurs is 5.10.
WSO2 IS does not support adding OAuth application configuration through a file inside /repository/conf/identity/service-providers/, because an OAuth application needs entries in the database to manage the tokens issued for it. File-based storage therefore does not work for OAuth applications.
When I attempt to upload the file manually via the management console of WSO2 IDS, I get an error that the application already exists.
This is kind of expected: even though WSO2 IS does not support OAuth applications from file-based configuration, a file in /repository/conf/identity/service-providers/ is still registered as an application in the system (WSO2 IS supports multiple inbound protocols, such as SAML or OAuth, for the same application).
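Since the OAuth client must end up in the database, one file-free way to provision it is WSO2 IS's Dynamic Client Registration (DCR) endpoint, which creates the OAuth application over REST. This is only a sketch: the path and the ext_param_client_id field are taken from the IS 5.10 DCR API and should be verified against your version, and all values below are placeholders.

```shell
# Register an OAuth2 client via the WSO2 IS DCR endpoint (sketch; verify the
# endpoint path and field names against your IS release; values are placeholders).
curl -k -X POST 'https://localhost:9443/api/identity/oauth2/dcr/v1.1/register' \
  -u 'admin:<admin-password>' \
  -H 'Content-Type: application/json' \
  -d '{
        "client_name": "service-provider-fuga",
        "grant_types": ["password", "refresh_token"],
        "ext_param_client_id": "1ou1fLDyFA9BEqywVtrR6vAxc48a"
      }'
```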
Answering my own question with the approach we took. Might be of benefit to others.
Since we are deploying the WSO2 Identity Server with Helm into a Kubernetes environment, we decided to create a Job that inserts the service provider via the WSO2 management API. The created Job looks as follows:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-identityserver.service-provider-fuga
  labels:
    system: identity-service
spec:
  template:
    spec:
      restartPolicy: OnFailure
      initContainers:
        - name: wait-for-ids
          image: groundnuty/k8s-wait-for:v1.3
          args:
            - "pod"
            - "-ldeployment=identityserver"
      containers:
        - name: import-service-provider-fuga
          image: curlimages/curl:7.72.0
          args:
            - /bin/sh
            - -ec
            - "curl --location --request POST 'http://{{ .Release.Name }}-identityserver-service:9763/api/server/v1/applications/import' --header 'Authorization: Basic YWRtaW46c3VwZXJTZWNyZXQ=' --form 'file=@\"/service-provider.xml\"'"
          volumeMounts:
            - name: identity-server-conf
              mountPath: /service-provider.xml
              subPath: service-provider.xml
      volumes:
        - name: identity-server-conf
          configMap:
            name: {{ .Release.Name }}-identityserver.cm
The wait-for-ids init container makes the Job wait until all IDS pods are running. The main container then calls the IDS management API to import the service provider. The service provider XML file is stored in a ConfigMap.
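One note on the Authorization header: it is only base64("user:password"), not encryption, so avoid hardcoding it in the chart; with Helm the pair can instead be built from values and encoded with the b64enc function. A minimal sketch of how the example header decodes (admin:superSecret is the placeholder pair that yields it):

```shell
# The Basic credentials in the Job are just base64("user:password") (not
# encryption). "admin:superSecret" is the placeholder pair that produces
# the example header used in the Job above.
AUTH=$(printf '%s' 'admin:superSecret' | base64)
echo "Authorization: Basic $AUTH"
```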