Using: Pivotal Cloud Foundry v2.x, Spring Cloud Data Flow Server v1.6.2.RELEASE, SQL Server 2016.
The server's datasource configuration does not appear to create a datasource successfully unless the database is a service bound to the application within PCF.
The SQL Server database is not a service provisioned within our PCF marketplace. I've rebuilt the server application, added the SQL Server JDBC driver JAR to the classpath, and included the datasource configuration:
---
applications:
- path: spring-cloud-dataflow-server-cloudfoundry-1.6.2.RELEASE.jar
  name: dataflow-server
  host: dataflow-server
  memory: 4096M
  disk_quota: 2048M
  no-route: false
  no-hostname: false
  health-check-type: 'port'
  buildpack: java_buildpack_offline
  env:
    JAVA_OPTS: -Dhttp.keepAlive=false
    JBP_CONFIG_CONTAINER_CERTIFICATE_TRUST_STORE: '{enabled: true}'
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: channing
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_APP_NAME_PREFIX: channing
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.pcf.com
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: pcf.com
    SPRING_APPLICATION_NAME: dataflow-server
    SPRING_DATASOURCE_URL: jdbc:sqlserver://nonpcf.sqlserver.com\\DBINSTANCE:1713;databaseName=SCDF_DEV
    SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.microsoft.sqlserver.jdbc.SQLServerDriver
    SPRING_DATASOURCE_USERNAME: username
    SPRING_DATASOURCE_PASSWORD: password
  services:
  - config-server
  - rabbit
security:
  basic:
    enabled: true
    realm: Spring Cloud Data Flow
spring:
  cloud:
    dataflow:
      features:
        analytics-enabled: false
The error occurs during application startup: an unresolved dependency because no unique instance of javax.sql.DataSource is available for injection.
Here is part of the stack trace:
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] Caused by: org.springframework.cloud.CloudException: No unique service matching interface javax.sql.DataSource found. Expected 1, found 0
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.Cloud.getSingletonServiceConnector(Cloud.java:197) ~[spring-cloud-connectors-core-2.0.2.RELEASE.jar!/:na]
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.config.java.CloudServiceConnectionFactory.dataSource(CloudServiceConnectionFactory.java:56) ~[spring-cloud-spring-service-connector-2.0.2.RELEASE.jar!/:na]
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.dataflow.server.cloudfoundry.config.DataSourceCloudConfig.scdfCloudDataSource(DataSourceCloudConfig.java:47) ~[spring-cloud-dataflow-server-cloudfoundry-autoconfig-1.6.2.RELEASE.jar!/:1.6.2.RELEASE]
Is this intentional? How can we point the SCDF server on PCF at a datasource that is not resident within the foundation?
Spring Cloud Data Flow's CF server is opinionated about relying on Spring Cloud Connectors for datasource and connection-pool customization.
Because we do this intentionally to take advantage of the automation provided by the library, there is no direct way to turn it off in SCDF itself.
However, there is an option to stop Spring Cloud Connectors from interfering entirely, and it is available as a Spring Boot property (i.e., spring.cloud=false), which applies to SCDF as well.
With this property set on the CF server, you'd be able to create a connection pool using the SPRING_DATASOURCE_* properties as defined in the manifest.yml above.
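For example, a minimal sketch of how the env block might express that override alongside the existing SPRING_DATASOURCE_* entries (placing the property in SPRING_APPLICATION_JSON is an assumption on my part; the essential piece is the spring.cloud=false override itself):

  env:
    # Sketch only: disable Spring Cloud Connectors so the SPRING_DATASOURCE_* values take effect.
    # The property could equally be supplied another way (e.g. as a command-line argument).
    SPRING_APPLICATION_JSON: '{"spring.cloud": false}'
    SPRING_DATASOURCE_URL: jdbc:sqlserver://nonpcf.sqlserver.com\\DBINSTANCE:1713;databaseName=SCDF_DEV
    SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.microsoft.sqlserver.jdbc.SQLServerDriver
    SPRING_DATASOURCE_USERNAME: username
    SPRING_DATASOURCE_PASSWORD: password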
UPDATE
Background: The declarative datasource overrides and Spring Cloud Connectors (on the classpath) are mutually exclusive; they cannot work together in any capacity.
Hence, it is advised to stick to a single model when customizing the CF server. The easiest solution in this case, of course, is to disable the connectors altogether.
We have deployed Istio 1.11.0 using its Helm chart in our dev and production environments.
We are using the configuration below in the Istio ConfigMap, which we have updated via the istio-control Helm chart.
meshConfig:
  extensionProviders:
  - name: "ext-authz-grpc"
    envoyExtAuthzGrpc:
      service: "ext-auth-service.default.svc.cluster.local"
      port: "50051"
      includeHeadersInCheck: [ "authorization", "ws-protocol" ]
      headersToUpstreamOnAllow: [ "authorization", "x-role", "x-id" ]
  accessLogFile: /dev/stdout
  enablePrometheusMerge: true
Basically, we are using a gRPC service as the external authorization server.
The above configuration is working fine.
One of our clients has deployed Istio 1.9.8 using the operator. (They have their own deployment model for Istio and do not allow us to deploy Istio using the Helm chart.)
When we try to apply the above changes through the operator, it gives us the error below:
2022-04-05T10:23:09.657830Z info installer Loading values from compiled in VFS at path profiles/minimal.yaml
2022-04-05T10:23:09.657837Z info installer Loading values from compiled in VFS at path profiles/default.yaml
2022-04-05T10:23:09.679340Z error installer failed to merge base profile with user IstioOperator CR profile-poc-customized, failed to unmarshall mesh config: unknown field "includeHeadersInCheck" in v1alpha1.MeshConfig_ExtensionProvider_EnvoyExternalAuthorizationGrpcProvider moreInfo=The values in the selected spec.profile could not be merged with the user IstioOperator resource. impact=The operator controller cannot create and act upon the user defined IstioOperator resource. The Istio control plane will not be installed or updated. action=Check that the IstioOperator resource has the correct syntax. If you are sure your configuration is correct, see https://istio.io/latest/about/bugs for possible solutions. likelyCause=The likely cause is an incorrect or badly formatted configuration.Another possible cause could be an issue with the Istio code.
If we directly edit the ConfigMap and make the changes, they are applied.
But it gives an error when we update it through the operator.
Can anybody help me understand why it is not working with the operator?
includeHeadersInCheck is only available for the HTTP provider, not the gRPC provider:
https://istio.io/v1.10/docs/reference/config/istio.mesh.v1alpha1/#MeshConfig-ExtensionProvider-EnvoyExternalAuthorizationGrpcProvider
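As a rough sketch (verify the exact field set against the 1.9 reference linked above, since supported fields differ between Istio versions), the same provider without the unsupported field would look like this; with the gRPC ext_authz protocol the request headers are already carried in the CheckRequest, so a header include-list only exists on the HTTP provider:

meshConfig:
  extensionProviders:
  - name: "ext-authz-grpc"
    envoyExtAuthzGrpc:
      service: "ext-auth-service.default.svc.cluster.local"
      port: "50051"
      # includeHeadersInCheck removed: it exists only on envoyExtAuthzHttp.
      # headersToUpstreamOnAllow is kept from the original config; confirm the
      # 1.9 gRPC provider schema also accepts it before applying via the operator.
      headersToUpstreamOnAllow: [ "authorization", "x-role", "x-id" ]
  accessLogFile: /dev/stdout
  enablePrometheusMerge: true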
I have been wrestling with this for a couple of days now. I want to deploy Spring Cloud Data Flow Server for Cloud Foundry to my org's enterprise Pivotal Cloud Foundry instance. My problem is forcing all Data Flow Server web requests to TLS/HTTPS. Here is an example of a configuration I've tried to get this working:
# manifest.yml
---
applications:
- name: gdp-dataflow-server
  buildpack: java_buildpack_offline
  host: dataflow-server
  memory: 2G
  disk_quota: 2G
  instances: 1
  path: spring-cloud-dataflow-server-cloudfoundry-1.2.3.RELEASE.jar
  env:
    SPRING_APPLICATION_NAME: dataflow-server
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.system.x.x.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: my-org
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: my-space
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: my-domain.io
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: user
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: pass
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: dataflow-mq
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_BUILDPACK: java_buildpack_offline
    SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: dataflow-db
    SPRING_APPLICATION_JSON: |
      {
        "server": {
          "use-forward-headers": true,
          "tomcat": {
            "remote-ip-header": "x-forwarded-for",
            "protocol-header": "x-forwarded-proto"
          }
        },
        "management": {
          "context-path": "/management",
          "security": {
            "enabled": true
          }
        },
        "security": {
          "require-ssl": true,
          "basic": {
            "enabled": true,
            "realm": "Data Flow Server"
          },
          "user": {
            "name": "dataflow-admin",
            "password": "nimda-wolfatad"
          }
        }
      }
  services:
  - dataflow-db
  - dataflow-redis
Despite the security block in SPRING_APPLICATION_JSON, the Data Flow Server's web endpoints are still accessible via insecure HTTP. How can I force all requests to HTTPS? Do I need to customize my own build of the Data Flow Server for Cloud Foundry? I understand that PCF's proxy is terminating SSL/TLS at the load balancer, but configuring the forward headers should induce Spring Security/Tomcat to behave the way I want, should it not? I must be missing something obvious here, because this seems like a common desire that should not be this difficult.
Thank you.
There's nothing out-of-the-box in Spring Boot proper that enables HTTPS and at the same time intercepts and auto-redirects plain HTTP to HTTPS.
There are several write-ups online on how to create a custom Configuration class that registers multiple connectors in Spring Boot (see example; a rough sketch also follows below).
Spring Cloud Data Flow (SCDF) is a regular Spring Boot application, so all of this applies to the SCDF server as well.
That said, if you intend to enforce HTTPS throughout your application interactions, there is a PCF setting, "Disable HTTP traffic to HAProxy", that can be applied as a global override in Elastic Runtime - see the docs. It is applied consistently to all applications and is not specific to Spring Boot or SCDF; even Python, Node, or other types of apps can be forced to interact via HTTPS with this setting.
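For reference, here is a rough sketch of that multiple-connector approach for the Spring Boot 1.x line that the 1.2.x server is built on (class name and ports are illustrative, not SCDF-specific); it marks every path CONFIDENTIAL and adds a plain-HTTP connector whose only job is to redirect to the HTTPS port:

import org.apache.catalina.Context;
import org.apache.catalina.connector.Connector;
import org.apache.tomcat.util.descriptor.web.SecurityCollection;
import org.apache.tomcat.util.descriptor.web.SecurityConstraint;
import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HttpToHttpsRedirectConfig {

    @Bean
    public EmbeddedServletContainerFactory servletContainer() {
        TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory() {
            @Override
            protected void postProcessContext(Context context) {
                // Require a secure channel for every request path.
                SecurityConstraint constraint = new SecurityConstraint();
                constraint.setUserConstraint("CONFIDENTIAL");
                SecurityCollection collection = new SecurityCollection();
                collection.addPattern("/*");
                constraint.addCollection(collection);
                context.addConstraint(constraint);
            }
        };
        // Extra plain-HTTP connector that only redirects callers to HTTPS.
        tomcat.addAdditionalTomcatConnectors(httpConnector());
        return tomcat;
    }

    private Connector httpConnector() {
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setScheme("http");
        connector.setPort(8080);         // illustrative HTTP port
        connector.setSecure(false);
        connector.setRedirectPort(8443); // illustrative HTTPS port
        return connector;
    }
}

On PCF, where TLS is terminated at the router/load balancer, the platform-level HAProxy override mentioned above is usually the cleaner way to enforce this for every app in the foundation.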
I am using the latest WSO2 CEP (v4.1.0) and Apache Storm 0.9.6 for WSO2 CEP clustering in distributed mode. I have followed the guidelines provided by WSO2 for CEP clustering.
After following those guidelines, CEP is working properly. Now I want to make sure CEP is correctly clustered. Is there any mechanism to check whether it is configured correctly?
You should be able to see logs similar to the following; note the IP and clustering port of the other members in the log.
INFO - MemberUtils Added member: Host:192.168.1.100, Remote Host:null, Port: 4200, HTTP:-1, HTTPS:-1, Domain: null, Sub-domain:null, Active:true
INFO - HazelcastClusteringAgent Hazelcast initialized in 1283ms
INFO - HazelcastClusteringAgent Local member: [03fa03f7-176b-48d5-9173-48866d7dd641] - Host:192.168.1.100, Remote Host:null, Port: 4100, HTTP:8280, HTTPS:8243, Domain: wso2con.domain, Sub-domain:mgt, Active:true
INFO - HazelcastClusteringAgent Elected this member [03fa03f7-176b-48d5-9173-48866d7dd641] as the Coordinator node
[2015-09-14 11:31:44,162] INFO - WKABasedMembershipScheme Member joined [a0f1c3cd-adaf-4fdf-ac9f-d6c6f3508022]: /192.168.1.100:4300
[2015-09-14 11:31:46,230] INFO - MemberUtils Added member: Host:192.168.1.100, Remote Host:null, Port: 4300, HTTP:8282, HTTPS:8245, Domain: wso2con.domain, Sub-domain:worker, Active:true
I tried one of the samples provided by WSO2. The Storm UI then shows the spouts and bolts created for the given Siddhi query; from that, we can tell whether CEP is correctly clustered.
The Apache Storm UI displays the bolts and spouts accordingly.
I haven't been able to find much on the web about this problem but...
I am setting up a fresh Bigtable cluster on Google Cloud. I've gone through the usual process for most Google APIs (creating a service account, noting my project ID, authenticating with the gcloud tool, setting the Google credentials environment variable, etc.).
After going through the setup, though, I'm running into an error I can't find anything about on the web:
Caused by: com.google.bigtable.repackaged.com.google.common.util.concurrent.UncheckedExecutionException: io.grpc.StatusRuntimeException:
NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
Here is the complete output that includes the error; note that I get the same error when trying to create a table as well:
./bin/hbase com.google.cloud.bigtable.hbase.CheckConfig
User Agent: bigtable-hbase-1.0-0.2.1
Project ID: bigtable-1127
Cluster Id: bigdatastats
ZoneId: us-central1-c
Cluster admin host: bigtableclusteradmin.googleapis.com
Table admin host: bigtabletableadmin.googleapis.com
Data host: bigtable.googleapis.com
Attempting credential refresh...
HBase Connection Class = com.google.cloud.bigtable.hbase1_0.BigtableConnection (OK)
Opening table admin connection...
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/Michael/bigtable/hbase-1.0.1.1/lib/slf4j-log4j12-1.7.7.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/Cellar/hadoop/2.7.1/libexec/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-11-12 01:30:31,552 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2015-11-12 01:30:32,619 INFO [main] grpc.BigtableSession: Opening connection for projectId bigtable-1127, zoneId us-central1-c, clusterId bigdatastats, on data host bigtable.googleapis.com, table admin host bigtabletableadmin.googleapis.com.
Tables in cluster bigdatastats:
Exception in thread "main" java.io.IOException: Failed to listTables
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.requestTableList(AbstractBigtableAdmin.java:221)
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.listTableNames(AbstractBigtableAdmin.java:208)
at com.google.cloud.bigtable.hbase.CheckConfig.main(CheckConfig.java:99)
Caused by: com.google.bigtable.repackaged.com.google.common.util.concurrent.UncheckedExecutionException: io.grpc.StatusRuntimeException: NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
at io.grpc.stub.Calls.getUnchecked(Calls.java:117)
at io.grpc.stub.Calls.blockingUnaryCall(Calls.java:129)
at com.google.bigtable.admin.table.v1.BigtableTableServiceGrpc$BigtableTableServiceBlockingStub.listTables(BigtableTableServiceGrpc.java:338)
at com.google.cloud.bigtable.grpc.BigtableTableAdminGrpcClient.listTables(BigtableTableAdminGrpcClient.java:44)
at org.apache.hadoop.hbase.client.AbstractBigtableAdmin.requestTableList(AbstractBigtableAdmin.java:219)
... 2 more
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Error listing tables for cluster projects/bigtable-1127/zones/us-central1-c/clusters/bigdatastats : Failed to read Tables in cluster: bigdatastats
at io.grpc.Status.asRuntimeException(Status.java:428)
at io.grpc.stub.Calls$UnaryStreamToFuture.onClose(Calls.java:324)
at io.grpc.ChannelImpl$CallImpl$ClientStreamListenerImpl$3.run(ChannelImpl.java:402)
at io.grpc.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
It would be amazing if someone could help with this. I am not sure what to do and I can't find anything out there. Obviously it's around authentication: my key file is fresh and in the right place, I've run gcloud auth, and I'm not sure what else to check.
Please let me know if I can provide any more information to help answer.
As noted in the comments, this was unlikely to have been an authentication issue.
You would receive a NOT_FOUND error if the resource you are trying to query does not exist in your project. So it's likely that you needed to switch your default project using gcloud config set project, as recommended by Les.
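For reference, switching the active project looks like this (the project ID is a placeholder; use whichever project actually owns the Bigtable cluster):

gcloud config set project <project-that-owns-the-cluster>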
I'm trying to run the Presto coordinator with an embedded discovery server on an AWS CDH4 cluster.
config.properties:
coordinator=true
datasources=jmx
http-server.http.port=8000
presto-metastore.db.type=h2
presto-metastore.db.filename=var/db/MetaStore
task.max-memory=1GB
discovery-server.enabled=true
discovery.uri=http://ip-10-0-0-11:8000
When the server starts, it can't register itself with discovery (relevant logs):
2013-11-08T19:38:38.193+0000 WARN main Bootstrap Warning: Configuration property 'discovery.uri' is deprecated and should not be used
2013-11-08T19:38:38.968+0000 INFO main Bootstrap discovery-server.enabled false true
2013-11-08T19:38:38.975+0000 INFO main Bootstrap discovery.uri null http://ip-10-0-0-11:8000 Discovery service base URI
2013-11-08T19:38:40.916+0000 ERROR Discovery-0 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (collector/general): Lookup of collector failed for http://ip-10-0-0-11:8000/v1/service/collector/general
2013-11-08T19:38:42.556+0000 ERROR Discovery-1 io.airlift.discovery.client.CachingServiceSelector Cannot connect to discovery server for refresh (presto/general): Lookup of presto failed for http://ip-10-0-0-11:8000/v1/service/presto/general
2013-11-08T19:38:43.854+0000 INFO main org.eclipse.jetty.server.AbstractConnector Started SelectChannelConnector#0.0.0.0:8000
I also tried running a standalone discovery server, with the same effect. It looks like the listener is started after the registration attempt is made.
I was wondering if someone would notice this in the logs :) It's actually not a problem. The error appears because the discovery client starts before the discovery server is ready. You'll see "succeeded for refresh" shortly afterwards in the logs, which shows that it's working. We will fix the log message eventually, but it's purely a cosmetic issue.