[Amazon](500150) SocketTimeoutException. Works locally but not on Lambda [Redshift, Spring Data JDBC, Spring Boot] - amazon-web-services

I am working on a simple API to query Redshift and I am encountering nothing but problems. The current one is that I am getting a SocketTimeoutException when I deploy it to Lambda. Googling this exception turns up tons of recommendations to add the "client CIDR/IP address to the VPC security group". However, my credentials (and IP) work fine for accessing the Redshift DB from my DB client (DBeaver), and when I run my Spring Boot application locally and call it from Postman. But once it is on Lambda I get the SocketTimeoutException.
I am reaching out to the team to check whether I do need to whitelist an IP. But the headache I was having before this was Spring Boot taking too long to start, causing other kinds of timeouts, and I have a feeling this issue has more to do with Spring Boot than with my Redshift connection.
Reasons I suspect this:
1. As I mentioned, I have had timeout issues for days, but it only switched to the socket timeout when I went from variations of the suggested:
public StreamLambdaHandler() throws ContainerInitializationException {
    long startTime = Instant.now().toEpochMilli();
    handler = new SpringBootProxyHandlerBuilder()
            .defaultProxy()
            .asyncInit(startTime)
            .springBootApplication(Application.class)
            .buildAndInitialize();
}
to what a different API my company is using has:
private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

static {
    try {
        handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
    } catch (ContainerInitializationException e) {
        e.printStackTrace();
        throw new RuntimeException("Could not initialize Spring Boot application", e);
    }
}
2. My company deploys a much heavier API (with many endpoints, service classes, etc.) that is only 60 KB, whereas my single-endpoint API is packaged as a shaded JAR with all the dependencies, which puts it at a whopping 19.6 MB! I am guessing this might be affecting the load time?
3. It takes 4.227 seconds to start locally.
The full stack trace is really, really long, but here is the part I think is most relevant:
2023-02-06T07:13:30.139-06:00 INIT_START Runtime Version: java:11.v15 Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:blahhalb
2023-02-06T07:13:30.715-06:00 13:13:30.711 [main] INFO com.amazonaws.serverless.proxy.internal.LambdaContainerHandler - Starting Lambda Container Handler
*****Starts app at 7:13:31*****
2023-02-06T07:13:31.634-06:00 . ____ _ __ _ _
2023-02-06T07:13:31.634-06:00 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
2023-02-06T07:13:31.634-06:00 ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2023-02-06T07:13:31.634-06:00 \\/ ___)| |_)| | | | | || (_| | ) ) ) )
2023-02-06T07:13:31.634-06:00 ' |____| .__|_| |_|_| |_\__, | / / / /
2023-02-06T07:13:31.634-06:00 =========|_|==============|___/=/_/_/_/
2023-02-06T07:13:31.638-06:00 :: Spring Boot ::
2023-02-06T07:13:31.834-06:00 2023-02-06 13:13:31.833 INFO 9 --- [ main] lambdainternal.AWSLambda : Starting AWSLambda using Java 11.0.14.1 on 169.254.10.245 with PID 9 (/var/runtime/lib/aws-lambda-java-runtime-0.2.0.jar started by sbx_user1051 in /var/task)
2023-02-06T07:13:31.835-06:00 2023-02-06 13:13:31.835 INFO 9 --- [ main] lambdainternal.AWSLambda : No active profile set, falling back to default profiles: default
2023-02-06T07:13:32.722-06:00 2023-02-06 13:13:32.722 INFO 9 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode.
2023-02-06T07:13:32.787-06:00 2023-02-06 13:13:32.787 INFO 9 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 58 ms. Found 1 JDBC repository interfaces.
2023-02-06T07:13:33.194-06:00 2023-02-06 13:13:33.194 INFO 9 --- [ main] c.a.s.p.i.servlet.AwsServletContext : Initializing Spring embedded WebApplicationContext
2023-02-06T07:13:33.194-06:00 2023-02-06 13:13:33.194 INFO 9 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1281 ms
2023-02-06T07:13:33.587-06:00 2023-02-06 13:13:33.585 INFO 9 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
*****After failing to make a connection for ~7 seconds, restarts the app*****
2023-02-06T07:13:40.762-06:00 13:13:40.758 [main] INFO com.amazonaws.serverless.proxy.internal.LambdaContainerHandler - Starting Lambda Container Handler
2023-02-06T07:13:41.613-06:00 . ____ _ __ _ _
2023-02-06T07:13:41.613-06:00 /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
2023-02-06T07:13:41.613-06:00 ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
2023-02-06T07:13:41.613-06:00 \\/ ___)| |_)| | | | | || (_| | ) ) ) )
2023-02-06T07:13:41.613-06:00 ' |____| .__|_| |_|_| |_\__, | / / / /
2023-02-06T07:13:41.613-06:00 =========|_|==============|___/=/_/_/_/
2023-02-06T07:13:41.616-06:00 :: Spring Boot ::
2023-02-06T07:13:41.807-06:00 2023-02-06 13:13:41.805 INFO 12 --- [ main] lambdainternal.AWSLambda : Starting AWSLambda using Java 11.0.14.1 on 169.254.10.245 with PID 12 (/var/runtime/lib/aws-lambda-java-runtime-0.2.0.jar started by sbx_user1051 in /var/task)
2023-02-06T07:13:41.807-06:00 2023-02-06 13:13:41.807 INFO 12 --- [ main] lambdainternal.AWSLambda : No active profile set, falling back to default profiles: default
2023-02-06T07:13:42.699-06:00 2023-02-06 13:13:42.699 INFO 12 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JDBC repositories in DEFAULT mode.
2023-02-06T07:13:42.762-06:00 2023-02-06 13:13:42.761 INFO 12 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 56 ms. Found 1 JDBC repository interfaces.
2023-02-06T07:13:43.160-06:00 2023-02-06 13:13:43.160 INFO 12 --- [ main] c.a.s.p.i.servlet.AwsServletContext : Initializing Spring embedded WebApplicationContext
2023-02-06T07:13:43.160-06:00 2023-02-06 13:13:43.160 INFO 12 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1277 ms
2023-02-06T07:13:43.549-06:00 2023-02-06 13:13:43.548 INFO 12 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting...
*****Tries to make a connection for 31 seconds before giving me the SocketTimeoutException*****
2023-02-06T07:14:14.685-06:00 2023-02-06 13:14:14.684 ERROR 12 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
2023-02-06T07:14:14.685-06:00 java.sql.SQLException: [Amazon](500150) Error setting/closing connection: SocketTimeoutException.
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.client.PGClient.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.client.PGClient.<init>(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.redshift.core.PGJDBCConnection.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.amazon.jdbc.common.AbstractDriver.connect(Unknown Source) ~[task/:na]
2023-02-06T07:14:14.685-06:00 at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[task/:na]
Is it possible that this is a Spring Boot initialization timeout? Or is it much more likely that it is in fact a Redshift connection issue?

So your use case is to write an AWS Lambda function that can perform CRUD operations on a Redshift cluster? If so, you can implement this use case with the Java Lambda runtime API:
com.amazonaws.services.lambda.runtime.RequestHandler
To perform Redshift data CRUD operations from Lambda, you can use software.amazon.awssdk.services.redshiftdata.RedshiftDataClient.
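For example, a minimal handler sketch (the class name and payload types here are illustrative, and the aws-lambda-java-core dependency is assumed):
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class RedshiftCrudHandler implements RequestHandler<String, String> {
    @Override
    public String handleRequest(String id, Context context) {
        // Delegate to the Redshift Data API code shown below, e.g. delPost(id).
        return "processed " + id;
    }
}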
Once you set up your Lambda function correctly, you can use the Redshift data client to modify the data. For example:
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.redshiftdata.RedshiftDataClient;
import software.amazon.awssdk.services.redshiftdata.model.ExecuteStatementRequest;
import software.amazon.awssdk.services.redshiftdata.model.RedshiftDataException;

private RedshiftDataClient getClient() {
    Region region = Region.US_WEST_2;
    RedshiftDataClient redshiftDataClient = RedshiftDataClient.builder()
            .region(region)
            .build();
    return redshiftDataClient;
}

public void delPost(String id) {
    try {
        RedshiftDataClient redshiftDataClient = getClient();
        // Beware: concatenating input into SQL like this is open to SQL
        // injection; it is kept here only to match the original example.
        String sqlStatement = "DELETE FROM blog WHERE idblog = '" + id + "'";
        // The Data API addresses the cluster by identifier instead of a JDBC
        // URL (clusterId, database and dbUser are fields defined elsewhere).
        ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
                .clusterIdentifier(clusterId)
                .database(database)
                .dbUser(dbUser)
                .sql(sqlStatement)
                .build();
        redshiftDataClient.executeStatement(statementRequest);
    } catch (RedshiftDataException e) {
        System.err.println(e.getMessage());
        System.exit(1);
    }
}
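Since the original question is about querying, a read works the same way; here is a minimal sketch (not part of the original answer) using the same assumed clusterId/database/dbUser fields. The Data API is asynchronous, so you poll for completion before fetching rows; the extra request/response types below also live in software.amazon.awssdk.services.redshiftdata.model:
public String firstBlogId(RedshiftDataClient client) {
    // Submit the statement; the Data API runs it asynchronously.
    ExecuteStatementResponse submitted = client.executeStatement(ExecuteStatementRequest.builder()
            .clusterIdentifier(clusterId)
            .database(database)
            .dbUser(dbUser)
            .sql("SELECT idblog FROM blog")
            .build());

    // Poll until the statement leaves the in-progress states
    // (a real implementation would sleep between polls and time out).
    DescribeStatementResponse status;
    do {
        status = client.describeStatement(
                DescribeStatementRequest.builder().id(submitted.id()).build());
    } while (status.status() == StatusString.SUBMITTED
            || status.status() == StatusString.PICKED
            || status.status() == StatusString.STARTED);

    // Fetch the first column of the first row, if any.
    GetStatementResultResponse result = client.getStatementResult(
            GetStatementResultRequest.builder().id(submitted.id()).build());
    return result.records().isEmpty() ? null : result.records().get(0).get(0).stringValue();
}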
Also, because your Lambda function invokes Amazon Redshift, the IAM role the Lambda function uses must have a policy that allows it to invoke this AWS service.
To conclude, you can use RedshiftDataClient as opposed to Spring APIs to insert/modify/delete Redshift data from an AWS Lambda function.

Related

Listing AWS S3 bucket content from Spring Boot with Apache Camel

I have a Spring Boot project running v2.5.4 which works fine.
I have access to S3 and I'm able to list the content of a bucket I have created. So I wanted to experiment with Apache Camel to try to just list the content of a bucket, which should be pretty simple according to the examples. But I keep getting errors.
I added two dependencies to my build.gradle:
implementation group: 'org.apache.camel.springboot', name: 'camel-core-starter', version: '3.13.0'
implementation group: 'org.apache.camel.springboot', name: 'camel-aws2-s3-starter', version: '3.13.0'
and then I simply created a SimpleRouteBuilder.java:
@Component
public class SimpleRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("aws2-s3://bucketName?amazonS3Client=#createS3Client&operation=listObjects&accessKey=xxxAccessKeyxxx&secretKey=xxxSecretKeyxxx")
                .log("Received body: ");
    }
}
And I keep getting this stack trace.
On my AWS S3 client factory I have set the bean name:
@Slf4j
@Configuration
public class S3ClientBeanFactory {
    @Bean(name = "s3Client")
and this seems to work; when I change the name to something else I get an error about this:
No bean could be found in the registry for: S3Client
But with "s3Client" set in the Camel endpoint URL I get this all the time:
2021-12-13 12:23:25.036 INFO [,,] 28267 --- [ main] o.a.c.impl.engine.AbstractCamelContext : Apache Camel 3.13.0 (camel-1) shutdown in 4ms (uptime:511ms)
2021-12-13 12:23:25.045 INFO [,,] 28267 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2021-12-13 12:23:25.069 INFO [,,] 28267 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2021-12-13 12:23:25.085 ERROR [,,] 28267 --- [ main] o.s.boot.SpringApplication : Application run failed
org.apache.camel.FailedToStartRouteException: Failed to start route route1 because of null
at org.apache.camel.impl.engine.RouteService.warmUp(RouteService.java:123)
at org.apache.camel.impl.engine.InternalRouteStartupManager.doWarmUpRoutes(InternalRouteStartupManager.java:306)
at org.apache.camel.impl.engine.InternalRouteStartupManager.safelyStartRouteServices(InternalRouteStartupManager.java:189)
at org.apache.camel.impl.engine.InternalRouteStartupManager.doStartOrResumeRoutes(InternalRouteStartupManager.java:147)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(AbstractCamelContext.java:3201)
at org.apache.camel.impl.engine.AbstractCamelContext.doStartContext(AbstractCamelContext.java:2863)
at org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCamelContext.java:2814)
at org.apache.camel.spring.boot.SpringBootCamelContext.doStart(SpringBootCamelContext.java:43)
at org.apache.camel.support.service.BaseService.start(BaseService.java:119)
at org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelContext.java:2510)
at org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:246)
at org.apache.camel.spring.SpringCamelContext.start(SpringCamelContext.java:119)
at org.apache.camel.spring.SpringCamelContext.onApplicationEvent(SpringCamelContext.java:151)
at org.springframework.context.event.SimpleApplicationEventMulticaster.doInvokeListener(SimpleApplicationEventMulticaster.java:176)
at org.springframework.context.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:169)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:143)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:421)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:378)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:938)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:586)
at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:145)
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754)
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:434)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:338)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1343)
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1332)
at dk.danskespil.scratchgames.ScratchgamesApplication.main(ScratchgamesApplication.java:22)
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 403, Request ID: null, Extended Request ID: FknaUW6/yRkYvJry9d8oIWU2hC4aRk7z8ilAZZxlcDN4s+P4bAoyzWVriJxUYj2bCyzCFFMSGNY=)
Is this operation not possible, or what am I missing to perform such a simple operation?
I ran into the same issue with the Apache Camel aws2-s3:// component; it was caused by insufficient rights on AWS S3:
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: null (Service: S3, Status Code: 403, ...)
But I have to mention that the S3Client from the AWS SDK works well for reading files with the same rights, i.e. the same account.
Explanation: I found that the aws2-s3:// component makes a HeadBucket API call (among others), which causes the error because the account has insufficient rights for that call.
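To confirm it is a permissions problem, you can reproduce the failing call directly with the plain SDK; a minimal sketch (the bucket name is a placeholder, credentials come from the default provider chain):
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.HeadBucketRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

public class HeadBucketCheck {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.create()) {
            // Fails with 403 when the credentials lack permission for this call.
            s3.headBucket(HeadBucketRequest.builder().bucket("bucketName").build());
            System.out.println("HeadBucket succeeded");
        } catch (S3Exception e) {
            System.err.println("HeadBucket failed with status " + e.statusCode());
        }
    }
}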
Spring doesn't consider configuration beans to be special, so it is possible that the route bean is created before the S3 client bean.
I'd try using the @DependsOn({"s3ClientBeanFactory"}) annotation on the route bean. More on it here: Controlling Bean Creation Order with @DependsOn Annotation
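A sketch of that suggestion applied to the route from the question (bean names taken from the snippets above; the credentials parameters are omitted here):
import org.apache.camel.builder.RouteBuilder;
import org.springframework.context.annotation.DependsOn;
import org.springframework.stereotype.Component;

@Component
@DependsOn("s3ClientBeanFactory") // make sure the S3 client configuration is created first
public class SimpleRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("aws2-s3://bucketName?amazonS3Client=#s3Client&operation=listObjects")
                .log("Received body: ${body}");
    }
}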

Redash container broke on "worker" in Fargate

I use the recommended Redash Docker image, tested with docker-compose locally; all services are up and running (server, scheduler, worker, redis, postgres), and http://localhost:5000/setup is up and running.
% docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`
Creating network "redash_default" with the default driver
Creating redash_postgres_1 ... done
Creating redash_redis_1 ... done
Creating redash_server_1 ... done
Creating redash_scheduler_1 ... done
Creating redash_adhoc_worker_1 ... done
Creating redash_scheduled_worker_1 ... done
Creating redash_nginx_1 ... done
% docker-compose run --rm server create_db
Creating redash_server_run ... done
[2021-10-29 23:53:52,904][PID:1][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2021-10-29 23:53:52,905][PID:1][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2021-10-29 23:53:52,933][PID:1][INFO][alembic.runtime.migration] Running stamp_revision -> 89bc7873a3e0
I built the image from this version and pushed it to ECR, then configured a Fargate task definition to run from this image. In the task definition I mapped ports (6379, 5432, 5000, 80, any possible ports). The task is showing that the worker timed out.
2021-10-29 18:31:08[2021-10-29 23:31:08,742][PID:6104][DEBUG][redash.query_runner] Registering Vertica (vertica) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08,744][PID:6104][DEBUG][redash.query_runner] Registering ClickHouse (clickhouse) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6104)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6105)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6106] [INFO] Worker exiting (pid: 6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6105] [INFO] Worker exiting (pid: 6105)
I manually added the two environment variables REDASH_REDIS_URL and REDASH_DATABASE_URL. They are not working either. My EC2 instance can access RDS, so it's not a DB problem.
ubuntu@ip-10-8-0-191:~$ psql -h p-d-mt-a-redash-rds-instance-1.c5clgmj5xaif.us-west-2.rds.amazonaws.com -p 5432 -U postgres
Password for user postgres:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.4)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
mytable | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +
| | | | | rdsadmin=CTc/rdsadmin
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
postgres=>
Does anyone know how to keep the worker running (not timing out and exiting)? Does Redash need a lot of configuration in order to launch on Fargate? BTW, I created the Redis and RDS instances in AWS. They are all in the same VPC, with the security group inbound rules configured too.

Getting deployment error while installing PCF Dev

I am trying to install PCF Dev on a local machine running Windows 10, using the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I get the error below in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
            L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs of the failed jobs routing-api, cloud_controller_clock, and credhub?
You need to install the BOSH CLI first: https://bosh.io/docs/cli-v2-install/
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment; in this case the name is cf-66ade9481d314315358c:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh, using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.

It's impossible to unit-test a Spring Data JPA @Query, really?

First of all, the goal. What is a unit test? A unit test is a test that tests the smallest piece of functionality, as opposed to integration tests, which test much more. E.g.:
a test that spawns a production ApplicationContext in any means or any form is NOT a unit test;
a test that touches or is even aware of the Application class (marked with @SpringBootApplication) is NOT a unit test;
a test that loads something from src/main/resources is NOT a unit test;
a test that loads external configuration from a Spring Cloud Config server is definitely not a unit test;
a unit test of a repository must not start (or even know about) web, MVC, or security concerns.
So, how does one write a unit test for a Spring Data JPA repository? (Or does such a popular and loved framework not support a pure unit test for such a thing?)
My project: Spring Cloud (Cloud Config service, Security OAuth2 service, Eureka, Zuul, authentication, authorization, etc.)
Let's try to test the simplest repository:
public interface StudentRepository extends CrudRepository<Student, Integer> {

    Optional<Student> findByStudentCode(Integer studentCode);

    Optional<Student> findTopByOrderByStudentCodeDesc();

    @Query(value = "SELECT COUNT(*) = 0 FROM t_student WHERE regexp_replace(LOWER(student_name), '\\s', '', 'g') = regexp_replace(LOWER(:suspect), '\\s', '', 'g')", nativeQuery = true)
    boolean isStudentNameSpeciallyUnique(@Param("suspect") String studentName);
}
The student entity will have: id, code (natural ID), name, age. Nothing special.
And here is the test. We need an SUT (our repository) and an entity manager to prepopulate the SUT. So:
@RunWith(SpringRunner.class)
@DataJpaTest // <-- loads full-blown production app context and fails!
public class StudentRepositoryTest {

    @Autowired
    TestEntityManager manager;

    @Autowired
    StudentRepository repository;

    @Test
    public void findByStudentCode_whenNoSuch_shouldReturnEmptyOptional() {
        final int invalidStudentCode = 321;
        Optional<Student> student = repository.findByStudentCode(invalidStudentCode);
        assertFalse(student.isPresent());
    }
}
Attempting to run it produces this:
...
2019-01-01 18:32:10.750 INFO 15868 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext#2d72f75e: startup date [Tue Jan 01 18:32:10 EET 2019]; root of context hierarchy
2019-01-01 18:32:11.294 INFO 15868 --- [ main] f.a.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2019-01-01 18:32:11.421 INFO 15868 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$9ed1a748] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.0.6.RELEASE)
2019-01-01 18:32:30.694 INFO 15868 --- [ main] c.c.c.ConfigServicePropertySourceLocator : Fetching config from server at : http://localhost:8888
...
The @DataJpaTest annotation searches upwards through the package hierarchy and loads the production application:
package org.givespring.atry.university.studentservice;

@EnableResourceServer
@EnableJpaAuditing
@SpringBootApplication(scanBasePackages = "org.givespring.atry.university") // to discover MicroserviceProperties and custom usercontext stuff
public class StudentServiceApp {

    public static void main(String[] args) {
        SpringApplication.run(StudentServiceApp.class, args);
    }
}
As a result, the error:
2019-01-01 18:32:32.789 ERROR 15868 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Unable to retrieve @EnableAutoConfiguration base packages
at org.springframework.boot.autoconfigure.AutoConfigurationPackages.get(AutoConfigurationPackages.java:76) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.boot.autoconfigure.data.AbstractRepositoryConfigurationSourceSupport.getBasePackages(AbstractRepositoryConfigurationSourceSupport.java:79) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.boot.autoconfigure.data.AbstractRepositoryConfigurationSourceSupport$1.getBasePackages(AbstractRepositoryConfigurationSourceSupport.java:73) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.data.repository.config.RepositoryConfigurationSourceSupport.lambda$getCandidates$2(RepositoryConfigurationSourceSupport.java:77) ~[spring-data-commons-2.0.11.RELEASE.jar:2.0.11.RELEASE]
at org.springframework.data.util.LazyStreamable.stream(LazyStreamable.java:50) ~[spring-data-commons-2.0.11.RELEASE.jar:2.0.11.RELEASE]
at org.springframework.data.util.LazyStreamable.iterator(LazyStreamable.java:41) ~[spring-data-commons-2.0.11.RELEASE.jar:2.0.11.RELEASE]
at org.springframework.data.repository.config.RepositoryConfigurationExtensionSupport.getRepositoryConfigurations(RepositoryConfigurationExtensionSupport.java:87) ~[spring-data-commons-2.0.11.RELEASE.jar:2.0.11.RELEASE]
at org.springframework.data.repository.config.RepositoryConfigurationDelegate.registerRepositoriesIn(RepositoryConfigurationDelegate.java:126) ~[spring-data-commons-2.0.11.RELEASE.jar:2.0.11.RELEASE]
at org.springframework.boot.autoconfigure.data.AbstractRepositoryConfigurationSourceSupport.registerBeanDefinitions(AbstractRepositoryConfigurationSourceSupport.java:60) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.lambda$loadBeanDefinitionsFromRegistrars$1(ConfigurationClassBeanDefinitionReader.java:358) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at java.util.LinkedHashMap.forEach(LinkedHashMap.java:684) ~[na:1.8.0_121]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsFromRegistrars(ConfigurationClassBeanDefinitionReader.java:357) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:146) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:118) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:328) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:233) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:271) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:91) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:692) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:530) ~[spring-context-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:754) ~[spring-boot-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:386) ~[spring-boot-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:307) ~[spring-boot-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.boot.test.context.SpringBootContextLoader.loadContext(SpringBootContextLoader.java:127) [spring-boot-test-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:99) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDelegate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:117) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.support.DefaultTestContext.getApplicationContext(DefaultTestContext.java:108) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.injectDependencies(DependencyInjectionTestExecutionListener.java:117) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.support.DependencyInjectionTestExecutionListener.prepareTestInstance(DependencyInjectionTestExecutionListener.java:83) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.boot.test.autoconfigure.SpringBootDependencyInjectionTestExecutionListener.prepareTestInstance(SpringBootDependencyInjectionTestExecutionListener.java:44) [spring-boot-test-autoconfigure-2.0.6.RELEASE.jar:2.0.6.RELEASE]
at org.springframework.test.context.TestContextManager.prepareTestInstance(TestContextManager.java:246) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.createTest(SpringJUnit4ClassRunner.java:227) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner$1.runReflectiveCall(SpringJUnit4ClassRunner.java:289) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) [junit-4.12.jar:4.12]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.methodBlock(SpringJUnit4ClassRunner.java:291) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:246) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.runChild(SpringJUnit4ClassRunner.java:97) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) [junit-4.12.jar:4.12]
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) [junit-4.12.jar:4.12]
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) [junit-4.12.jar:4.12]
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) [junit-4.12.jar:4.12]
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) [junit-4.12.jar:4.12]
at org.springframework.test.context.junit4.statements.RunBeforeTestClassCallbacks.evaluate(RunBeforeTestClassCallbacks.java:61) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.springframework.test.context.junit4.statements.RunAfterTestClassCallbacks.evaluate(RunAfterTestClassCallbacks.java:70) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.junit.runners.ParentRunner.run(ParentRunner.java:363) [junit-4.12.jar:4.12]
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:190) [spring-test-5.0.10.RELEASE.jar:5.0.10.RELEASE]
at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12]
at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) [junit-rt.jar:na]
at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) [junit-rt.jar:na]
at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) [junit-rt.jar:na]
at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) [junit-rt.jar:na]
2019-01-01 18:32:32.792 INFO 15868 --- [ main] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#6a2eea2a: startup date [Tue Jan 01 18:32:32 EET 2019]; parent: org.springframework.context.annotation.AnnotationConfigApplicationContext#2d72f75e
In short, a unit test is a fast test of a small unit. What a unit is depends on the developer or on architecture guidelines; it can be any small, testable piece of code, such as a method or a class.
Also, if you think about it, there is no code you write for your repositories; you depend on the Spring platform. If you find a bug, feel free to raise a ticket.
So there's no need to write unit tests for Spring Data JPA repository methods; if what you want to test is your own code around a repository, mock the repository instead, as in the sketch below.
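A minimal sketch of that kind of test (class name hypothetical; JUnit 4 and Mockito assumed, matching the question's stack):
import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Optional;
import org.junit.Test;

public class StudentLookupTest {

    @Test
    public void unknownCodeYieldsEmpty() {
        // No Spring context at all: the repository interface is simply mocked,
        // and the service logic that wraps it can be exercised directly.
        StudentRepository repository = mock(StudentRepository.class);
        when(repository.findByStudentCode(321)).thenReturn(Optional.empty());
        assertFalse(repository.findByStudentCode(321).isPresent());
    }
}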
If you do need to test a Spring Data JPA repository itself (save, delete, findOne, findAll, etc.), then as you can see the unit is not small, so an integration test is required.
This repository contains an example of an integration test for Spring Data JPA repository methods:
https://github.com/jrichardsz/spring-boot-templates/blob/master/003-hql-database-with-integration-test/src/test/java/test/CustomerRepositoryIntegrationTest.java
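Separately, if the immediate pain is that @DataJpaTest walks up to the production StudentServiceApp, a stripped-down boot class in the test sources will stop the upward search; a minimal sketch (assumes an embedded database such as H2 on the test classpath):
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Placed in src/test/java in the same package as StudentRepositoryTest: the
// slice test searches upwards from the test's package for a boot configuration
// and stops at the first one it finds, so the production StudentServiceApp
// (with @EnableResourceServer, config-server lookups, etc.) is never loaded.
@SpringBootApplication
public class TestApp {
}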
References:
https://stackoverflow.com/a/23442457/3957754
https://www.quora.com/What-is-the-real-efficiency-of-unit-testing

Docker on mac error message can't connect to Docker endpoint

I have tried the Vagrant devenv for a multi-peer network and it worked fine. Now I am trying to do the same thing on Mac, but I get this error message:
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04c Error building images: cannot connect to Docker endpoint
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04d Image Output:
vp_1 | ********************
vp_1 |
vp_1 | ********************
vp_1 | 07:21:42.553 [dockercontroller] Start -> ERRO 05b start-could not recreate container cannot connect to Docker endpoint
vp_1 | 07:21:42.553 [container] unlockContainer -> DEBU 05c container lock deleted(dev-jdoe-04233c6dd8364b9f0749882eb6d1b50992b942aa0a664182946f411ab46802a88574932ccd75f8c75e780036e363d52dd56ccadc2bfde95709fc39148d76f050)
vp_1 | 07:21:42.553 [chaincode] Launch -> ERRO 05d launchAndWaitForRegister failed Error starting container: cannot connect to Docker endpoint
Below is my compose file:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
I have tried setting the endpoint to "unix:///var/run/docker.sock" and then I get the other error message below:
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 045 Error building images: dial unix /var/run/docker.sock: connect: no such file or directory
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 046 Image Output:
With CORE_VM_ENDPOINT set to unix:///var/run/docker.sock, please make sure that /var/run/docker.sock exists on your host, and mount it into the container if it is not already there.
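For example, the relevant part of the compose file could look like this (a sketch extending the file from the question):
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  volumes:
    # mount the host's Docker socket into the peer container
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
  command: peer node start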
Also, refer to the following question: Hyperledger Docker endpoint not found