I want to use Jetty instead of Tomcat, but how do I set the MySQL max-idle setting when running on Jetty?
Environment: Spring Boot, Gradle
spring:
  datasource:
    url: jdbc:mysql://${MYSQL_HOST:cc-mysql}:${MYSQL_PORT:3306}/cc
    username: ${MYSQL_USERNAME:root}
    password: ${MYSQL_PASSWORD:root}
    driver-class-name: com.mysql.jdbc.Driver
    tomcat:
      max-active: ${MYSQL_MAX_ACTIVE:10}
      max-idle: ${MYSQL_MAX_IDEL:1}
  jpa:
    show-sql: false
    hibernate:
      ddl-auto: update
The configuration within spring.datasource.tomcat.* is for configuring the Tomcat DataSource Pool (aka org.apache.tomcat.jdbc.pool.DataSource).
Here is a more complete example with the missing (implied?) spring.datasource.type option:
spring:
  application:
    name: My Example Application
  datasource:
    name: example
    url: jdbc:mysql://dev.mysql.example.com:4444/db?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&autoReconnect=true&failOverReadOnly=false
    username: userexample
    password: userpass
    type: org.apache.tomcat.jdbc.pool.DataSource
    driver-class-name: com.mysql.jdbc.Driver
    tomcat:
      max-active: 5
      max-idle: 5
      min-idle: 5
      initial-size: 5
      validation-query: 'select 1'
      test-on-borrow: true
      test-on-return: true
      test-while-idle: true
      max-wait: 15000
Jetty doesn't come with a container-specific DataSource pool.
At this point you have two options:
Use the MySQL DataSource directly (this is the simplest approach).
Use one of the many DataSource pooling libraries out there instead.
If you choose to use a DataSource pool library, it will have its own configuration that you'll have to learn in order to know how to tweak it. A max-idle setting may not exist, may be named differently in that library, may not be configured as a simple number, or may refer to minutes (or milliseconds) instead of seconds; see the sketch below.
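For example, with HikariCP (a popular pooling library, and the default in Spring Boot 2) a roughly equivalent configuration might look like the following sketch. The property names are HikariCP's, not Tomcat's: there is no max-idle, the nearest analogues are minimum-idle/maximum-pool-size, and idle-timeout is in milliseconds. The environment variable names are simply reused from the question above.
spring:
  datasource:
    url: jdbc:mysql://${MYSQL_HOST:cc-mysql}:${MYSQL_PORT:3306}/cc
    username: ${MYSQL_USERNAME:root}
    password: ${MYSQL_PASSWORD:root}
    type: com.zaxxer.hikari.HikariDataSource
    hikari:
      maximum-pool-size: ${MYSQL_MAX_ACTIVE:10}  # rough analogue of max-active
      minimum-idle: ${MYSQL_MAX_IDEL:1}          # nearest analogue of max-idle
      idle-timeout: 60000                        # milliseconds, not seconds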
We are using Google Cloud Build as our CI/CD tool, and we use private pools to be able to connect to our database using private IPs.
Since 08/27 our builds using private pools have been stuck in Queued and are never executed, or they fail due to timeout; they just hang there until we cancel them.
We have already tried without success:
Change the worker pool to another region (from southamerica-east1 to us-central1);
Recreate the worker pool with different configurations;
Recreate all triggers and connections.
Removing the worker pool configuration (running the build in the global pool) allowed the build to execute.
cloudbuild.yaml:
steps:
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Backup database
    args: ['gcloud', 'sql', 'backups', 'create', '--instance=${_DATABASE_INSTANCE_NAME}']
  - name: 'node:14.17.4-slim'
    id: Migrate database
    entrypoint: npm
    dir: 'build'
    args: ['...']
    secretEnv: ['DATABASE_URL']
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    id: Migrate traffic to new version
    dir: 'build'
    entrypoint: bash
    args: ['-c', 'gcloud app services set-traffic ${_SERVICE_NAME} --splits ${_VERSION_NAME}=1']
availableSecrets:
  secretManager:
    - versionName: '${_DATABASE_URL_SECRET}'
      env: 'DATABASE_URL'
options:
  pool:
    name: 'projects/$PROJECT_ID/locations/southamerica-east1/workerPools/<project-id>'
Our worker pool configuration:
$ gcloud builds worker-pools describe <worker-pool-id> --region=southamerica-east1 --project=<project-id>
createTime: '2021-08-30T19:35:57.833710523Z'
etag: W/"..."
name: <worker-pool-id>
privatePoolV1Config:
  networkConfig:
    egressOption: PUBLIC_EGRESS
    peeredNetwork: projects/<project-id>/global/networks/default
  workerConfig:
    diskSizeGb: '1000'
    machineType: e2-medium
state: RUNNING
uid: ...
updateTime: '2021-08-30T20:14:13.918712802Z'
This was the subject of my discussion with the Cloud Build PM last week... TL;DR: if you don't have a support subscription or a corporate account, you can't (for now).
In detail, if you check the first link in RJC's answer, you will see the following.
If you have a closer look, you can see (with my personal account, even though I have an Organization structure) that the Concurrent Builds per worker pool quota is set to 0. That is the reason your build job queues forever.
The most annoying part is this one. Tick the checkbox on the Concurrent builds per worker pool line and then click edit, to change the limit. Here is what you get:
Read carefully: set a limit between 0 and 0.
Therefore, if you don't have a support subscription (like me), you can't use the feature with your personal account. I was able to use it with my corporate account, even if I shouldn't...
For now, I don't have a solution, only this latest message from the PM:
The behaviour around quota restrictions in private pools is a recent change that we're still iterating on and appreciate the feedback to make it easier for personal accounts to try out the feature.
A build stuck in the Queued state can have the following possible causes:
Concurrency limits. Cloud Build enforces quotas on running builds for various reasons. By default, Cloud Build has a limit of only 10 concurrent builds, while each worker pool has a limit of 30 concurrent builds. You can further check the quota limits in this link.
Using a custom machine size. In addition to the standard machine type, Cloud Build provides four high-CPU virtual machine types to run your builds.
You are using the worker pools alpha and have too few nodes available.
Additionally, if the issue still persists, you can submit a bug under Google Cloud. I see that your colleague already filed a public issue tracker report in this link. In addition, if you have a free trial or paid support plan, it would be better to use it to file an issue.
I am setting up a GCP URL map to route requests to backend services based on cookie values. Since the cookie has multiple key-value pairs, I am trying to use a regex matcher.
I need to route requests to backends based on the region value from the cookie.
A typical cookie would look like this: foo=bar;region=eu;variant=beta;
defaultService: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-1
kind: compute#urlMap
name: regex-url-map
hostRules:
  - hosts:
      - '*'
    pathMatcher: path-matcher-1
pathMatchers:
  - defaultService: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-1
    name: path-matcher-1
    routeRules:
      - matchRules:
          - prefixMatch: /
            headerMatches:
              - headerName: Cookie
                regexMatch: (region=us)
        priority: 0
        service: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-1
      - matchRules:
          - prefixMatch: /
            headerMatches:
              - headerName: Cookie
                regexMatch: (region=eu)
        priority: 1
        service: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-2
However, this url-map fails validation with this error:
$ gcloud compute url-maps validate --source regex-url-map.yaml
result:
  loadErrors:
  - HttpHeaderMatch has no predicates specified
  loadSucceeded: false
  testPassed: false
Please note that an exact match on the cookie passes validation and matches correctly if the cookie value is just something like region=us. The headerMatches section for the exact match would look like this:
headerMatches:
  - headerName: Cookie
    exactMatch: region=us
Any pointers on what I am doing wrong here?
Thanks!
Your reasoning is correct, but the feature you're trying to use is unsupported for external load balancing in GCP; it works only with internal load balancing.
Look at the last sentence from the documentation:
Note that regexMatch only applies to Loadbalancers that have their loadBalancingScheme set to INTERNAL_SELF_MANAGED.
I know it isn't the answer you're looking for, but you can always file a new feature request on Google's IssueTracker and explain in detail what you want, how it could work, etc.
You can always try to pass the region value in the HTTP request path instead. Rather than always requesting https://myhost.com, add a suffix, for example https://myhost.com/region1; that would allow the GCP load balancer rules to process it and direct the traffic to the backend you wish.
Have a look at this example of what you can and can't do with forwarding rules in GCP. Another example here. And another one (mine) explaining how to use pathMatcher to direct traffic to different backend services.
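As a rough sketch of that path-based alternative (the /us/* and /eu/* paths are hypothetical here, standing in for whatever suffix scheme you choose), the pathMatchers section could use pathRules instead of cookie matching:
pathMatchers:
  - name: path-matcher-1
    defaultService: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-1
    pathRules:
      - paths:
          - /us/*
        service: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-1
      - paths:
          - /eu/*
        service: https://www.googleapis.com/compute/v1/projects/<project_id>/global/backendServices/multi-region-2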
Using: Pivotal Cloud Foundry v2.x, Spring Cloud Data Flow Server v1.6.2.RELEASE, SQL Server 2016.
The server's datasource configuration does not appear to successfully create a datasource if the database is not a service bound to the application within PCF.
The SQL Server database is not a service provisioned within our PCF marketplace. I've rebuilt the server application, added the SQL Server JDBC driver jar to the classpath, and included this datasource configuration:
---
applications:
  - path: spring-cloud-dataflow-server-cloudfoundry-1.6.2.RELEASE.jar
    name: dataflow-server
    host: dataflow-server
    memory: 4096M
    disk_quota: 2048M
    no-route: false
    no-hostname: false
    health-check-type: 'port'
    buildpack: java_buildpack_offline
    env:
      JAVA_OPTS: -Dhttp.keepAlive=false
      JBP_CONFIG_CONTAINER_CERTIFICATE_TRUST_STORE: '{enabled: true}'
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: channing
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_APP_NAME_PREFIX: channing
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.pcf.com
      SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: pcf.com
      SPRING_APPLICATION_NAME: dataflow-server
      SPRING_DATASOURCE_URL: jdbc:sqlserver://nonpcf.sqlserver.com\\DBINSTANCE:1713;databaseName=SCDF_DEV
      SPRING_DATASOURCE_DRIVER_CLASS_NAME: com.microsoft.sqlserver.jdbc.SQLServerDriver
      SPRING_DATASOURCE_USERNAME: username
      SPRING_DATASOURCE_PASSWORD: password
    services:
      - config-server
      - rabbit
security:
  basic:
    enabled: true
    realm: Spring Cloud Data Flow
spring:
  cloud:
    dataflow:
      features:
        analytics-enabled: false
The error occurs during application startup, stating an unresolved dependency: there is no unique instance of javax.sql.DataSource available for injection.
Here is part of the stack trace:
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] Caused by: org.springframework.cloud.CloudException: No unique service matching interface javax.sql.DataSource found. Expected 1, found 0
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.Cloud.getSingletonServiceConnector(Cloud.java:197) ~[spring-cloud-connectors-core-2.0.2.RELEASE.jar!/:na]
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.config.java.CloudServiceConnectionFactory.dataSource(CloudServiceConnectionFactory.java:56) ~[spring-cloud-spring-service-connector-2.0.2.RELEASE.jar!/:na]
2018-10-23T09:39:14.365-06:00 [APP/PROC/WEB/0] [OUT] at org.springframework.cloud.dataflow.server.cloudfoundry.config.DataSourceCloudConfig.scdfCloudDataSource(DataSourceCloudConfig.java:47) ~[spring-cloud-dataflow-server-cloudfoundry-autoconfig-1.6.2.RELEASE.jar!/:1.6.2.RELEASE]
Is this intentional? How can we bind the PCF SCDF server to a datasource that is not resident within the foundation?
Spring Cloud Data Flow's CF server is opinionated about relying on Spring Cloud Connectors for datasource and connection-pool customization.
Since we do this intentionally to take advantage of the automation provided by the library, there is no direct way to turn it off in SCDF itself.
However, there is an option to stop Spring Cloud Connectors from interfering entirely, and that option is available as a Spring Boot property (i.e., spring.cloud=false), which applies to SCDF as well.
With this property set on the CF server, you'd be able to create a connection pool using the SPRING_DATASOURCE_* properties as they are defined in the manifest.yml above.
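As a minimal sketch (assuming Spring Boot's relaxed binding, which maps the SPRING_CLOUD environment variable to the spring.cloud property), the env block in the manifest above would gain one entry:
env:
  # ...existing entries from the manifest above...
  SPRING_CLOUD: 'false'  # disables Spring Cloud Connectors (spring.cloud=false)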
UPDATE
Background: the declarative datasource overrides and Spring Cloud Connectors (on the classpath) are mutually exclusive; they cannot work together in any capacity.
Hence, it is advised to stick to a single model when customizing the CF server. The easiest solution in this case, of course, is to disable the connector altogether.
I want to create two databases in one Cloud SQL instance.
But when it is written the following way, it results in an error.
resources:
  - name: test-instance
    type: sqladmin.v1beta4.instance
    properties:
      region: us-central
      backendType: SECOND_GEN
      instanceType: CLOUD_SQL_INSTANCE
      settings:
        tier: db-f1-micro
  - name: test_db1
    type: sqladmin.v1beta4.database
    properties:
      instance: $(ref.test-instance.name)
      charset: utf8mb4
      collation: utf8mb4_general_ci
  - name: test_db2
    type: sqladmin.v1beta4.database
    properties:
      instance: $(ref.test-instance.name)
      charset: utf8mb4
      collation: utf8mb4_general_ci
output:
ERROR: (gcloud.deployment-manager.deployments.create) Error in Operation [operation-********]
- code: RESOURCE_ERROR
  location: /deployments/sample-deploy/resources/test_db2
  message: '{"ResourceType":"sqladmin.v1beta4.database","ResourceErrorCode":"403","ResourceErrorMessage":{"code":403,"errors":[{"domain":"global","message":"Operation failed because another operation was already in progress.","reason":"operationInProgress"}],"message":"Operation failed because another operation was already in progress.","statusMessage":"Forbidden","requestPath":"https://www.googleapis.com/sql/v1beta4/projects/****/instances/test-instance/databases","httpMethod":"POST"}}'
Please tell me what to do to resolve the error.
The “ResourceErrorCode” error originates with the Cloud SQL API.
The issue here is that Deployment Manager tries to run all resource modifications in parallel unless you specify a dependency between resources. Deployment Manager is a declarative configuration tool; it runs the deployment steps in parallel whether or not they are independent of each other.
In this specific case, Cloud SQL is not able to create two databases at the same time, which is why you are seeing the error message: Operation failed because another operation was already in progress.
There can be only one pending operation at a given point in time because of the inherent system architecture; this is a limitation on concurrent writes to a Cloud SQL instance.
To resolve this issue, you will have to create the two databases in sequence, not in parallel, as in the sketch below.
For more information on how to do so, you may consult the documentation on this matter.
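A minimal sketch using Deployment Manager's metadata.dependsOn field, which forces test_db2 to wait until test_db1 has been created (the instance resource stays as in the question):
resources:
  - name: test_db1
    type: sqladmin.v1beta4.database
    properties:
      instance: $(ref.test-instance.name)
      charset: utf8mb4
      collation: utf8mb4_general_ci
  - name: test_db2
    type: sqladmin.v1beta4.database
    metadata:
      dependsOn:
        - test_db1  # serializes the two database creations
    properties:
      instance: $(ref.test-instance.name)
      charset: utf8mb4
      collation: utf8mb4_general_ci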
I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) to our on-premise LDAP server running on Windows 2008 R2. However, I am unable to do so; I am getting the following error (Could not authorize you from Ldapmain because "Invalid credentials"):
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: 'XX.YYY.Z.XX'
    port: 389
    uid: 'sAMAccountName'
    method: 'plain' # "tls" or "ssl" or "plain"
    bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    password: 'pwd1234'
    active_directory: true
    allow_username_or_email_login: true
    base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    user_filter: ''
EOS
There are two users: gitlab (a newly created AD user) and john.doe (an old AD user).
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run the gitlab-rake gitlab:ldap:check command, it displays info about that particular user only, and not all users.
Earlier, gitlab-rake gitlab:ldap:check was displaying the first 100 results from AD when my credentials (john.doe) were configured in the gitlab.rb file. Since these were my personal credentials, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in gitlab.rb and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be due to some permission issue with the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, now when I run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! I think, somehow, GitLab is "forgetting" previous details.
Any help would really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using; upgrading to 8.0.5 fixed it.
Also, the values of bind_dn and base that worked for me are below. Note that base is now the OU subtree rather than the bind user's own DN; with base scoped to the 'CN=git lab,...' entry itself, a search can only ever return that single record, which likely explains the one-record results described above.
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'