Getting deployment error while installing PCF Dev

I am trying to install PCF Dev on my local machine, which runs Windows 10, using the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I am getting the below error in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs of the failed jobs routing-api, cloud_controller_clock, and credhub?

You need to install the bosh CLI first: https://bosh.io/docs/cli-v2-install/
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment; in this case the name is cf-66ade9481d314315358c:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh, using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.
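For example (the archive name comes from the download step above; the nested per-instance archives and per-job subdirectories follow the usual bosh logs layout, so treat the exact inner names as assumptions):
mkdir pcf-dev-logs
tar -xzf cf-66ade9481d314315358c-20200509-195443-771607.tgz -C pcf-dev-logs
# the failing jobs ran on the control VM, so unpack that instance's archive next
cd pcf-dev-logs && tar -xzf control.*.tgz
# each job has its own subdirectory; start with the ones named in the error
ls routing-api/ cloud_controller_clock/ credhub/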

Related

Redash container broke on "worker" in Fargate

I use the recommended Redash docker image; docker-compose is tested locally, and all services (server, scheduler, worker, redis, postgres) are up and running. http://localhost:5000/setup is up and running.
% docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`
Creating network "redash_default" with the default driver
Creating redash_postgres_1 ... done
Creating redash_redis_1 ... done
Creating redash_server_1 ... done
Creating redash_scheduler_1 ... done
Creating redash_adhoc_worker_1 ... done
Creating redash_scheduled_worker_1 ... done
Creating redash_nginx_1 ... done
% docker-compose run --rm server create_db
Creating redash_server_run ... done
[2021-10-29 23:53:52,904][PID:1][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2021-10-29 23:53:52,905][PID:1][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2021-10-29 23:53:52,933][PID:1][INFO][alembic.runtime.migration] Running stamp_revision -> 89bc7873a3e0
I built the image from this version and pushed it to ECR, then configured a Fargate task definition to run from this image. In the task definition I mapped ports (6379, 5432, 5000, 80, any possible ports). The task shows the worker timing out:
2021-10-29 18:31:08[2021-10-29 23:31:08,742][PID:6104][DEBUG][redash.query_runner] Registering Vertica (vertica) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08,744][PID:6104][DEBUG][redash.query_runner] Registering ClickHouse (clickhouse) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6104)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6105)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6106] [INFO] Worker exiting (pid: 6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6105] [INFO] Worker exiting (pid: 6105)
I manually added two environment variables, REDASH_REDIS_URL and REDASH_DATABASE_URL, but they are not working either. My EC2 instance can access RDS, so it's not a database problem:
ubuntu@ip-10-8-0-191:~$ psql -h p-d-mt-a-redash-rds-instance-1.c5clgmj5xaif.us-west-2.rds.amazonaws.com -p 5432 -U postgres
Password for user postgres:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.4)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
mytable | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +
| | | | | rdsadmin=CTc/rdsadmin
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
postgres=>
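For reference, those two variables follow the standard connection-URL formats; a sketch with placeholder endpoints and credentials (substitute your real RDS and Redis hosts):
export REDASH_DATABASE_URL="postgresql://postgres:<password>@<rds-endpoint>:5432/postgres"
export REDASH_REDIS_URL="redis://<redis-endpoint>:6379/0"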
Does anyone know how to make the worker work (not time out and exit)? Does it need a lot of configuration to get Redash launched by Fargate? BTW, I created Redis and RDS in AWS; they are all in the same VPC, and the security group inbound rules are configured too.

Trouble with creating first Substrate chain

When I try to compile the Node Template I get a series of errors.
error: failed to run custom build command for node-template-runtime v2.0.0 (/Users/Modulus3D/VSCode Projects/substrate-node-template/runtime)
Caused by:
process didn't exit successfully: /Users/Modulus3D/VSCode Projects/substrate-node-template/target/release/build/node-template-runtime-cae9ad6029c9f681/build-script-build (exit code: 1)
--- stdout
Executing build command: "rustup" "run" "nightly" "cargo" "rustc" "--target=wasm32-unknown-unknown" "--manifest-path=/Users/Modulus3D/VSCode Projects/substrate-node-template/target/release/wbuild/node-template-runtime/Cargo.toml" "--color=always" "--release"
and also:
error[E0282]: type annotations needed
--> /Users/Modulus3D/.cargo/registry/src/github.com-1ecc6299db9ec823/sp-arithmetic-2.0.0/src/fixed_point.rs:541:9
|
541 | let accuracy = P::ACCURACY.saturated_into();
| ^^^^^^^^ consider giving accuracy a type
...
1595 | / implement_fixed!(
1596 | | FixedI64,
1597 | | test_fixed_i64,
1598 | | i64,
... |
1601 | | "Fixed Point 64 bits signed, range = [-9223372036.854775808, 9223372036.854775807]",
1602 | | );
| |__- in this macro invocation
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0282]: type annotations needed
--> /Users/Modulus3D/.cargo/registry/src/github.com-1ecc6299db9ec823/sp-arithmetic-2.0.0/src/fixed_point.rs:541:9
|
541 | let accuracy = P::ACCURACY.saturated_into();
| ^^^^^^^^ consider giving accuracy a type
...
1604 | / implement_fixed!(
1605 | | FixedI128,
1606 | | test_fixed_i128,
1607 | | i128,
... |
1611 | | [-170141183460469231731.687303715884105728, 170141183460469231731.687303715884105727]_",
1612 | | );
| |__- in this macro invocation
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error[E0282]: type annotations needed
--> /Users/Modulus3D/.cargo/registry/src/github.com-1ecc6299db9ec823/sp-arithmetic-2.0.0/src/fixed_point.rs:541:9
|
541 | let accuracy = P::ACCURACY.saturated_into();
| ^^^^^^^^ consider giving accuracy a type
...
1614 | / implement_fixed!(
1615 | | FixedU128,
1616 | | test_fixed_u128,
1617 | | u128,
... |
1621 | | [0.000000000000000000, 340282366920938463463.374607431768211455]_",
1622 | | );
| |__- in this macro invocation
|
= note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info)
error: aborting due to 3 previous errors
For more information about this error, try rustc --explain E0282.
error: could not compile sp-arithmetic
To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed
warning: build failed, waiting for other jobs to finish...
error: build failed
Any suggestions on how to resolve these errors?
Looks like you need to downgrade your nightly version.
You can do so by running the following sequence of commands:
rustup install nightly-2020-10-06
rustup target add wasm32-unknown-unknown --toolchain nightly-2020-10-06
export WASM_BUILD_TOOLCHAIN=nightly-2020-10-06
You can learn more about how nightly is used with substrate here: https://substrate.dev/docs/en/knowledgebase/getting-started/#rust-nightly-toolchain
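With the toolchain pinned via WASM_BUILD_TOOLCHAIN, the native build keeps using your default toolchain while the wasm runtime build uses the pinned nightly; the rebuild itself is the standard invocation (run from the node-template directory; rustup show just confirms the toolchain landed):
rustup show
cargo build --release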
Fresh projects always run into this issue of differing nightly versions.
If you are running Substrate version 2.0.0, you can solve it with the commands below:
rustup install nightly-2020-07-02
rustup override set nightly-2020-07-02
rustup target add wasm32-unknown-unknown --toolchain nightly-2020-07-02
then try to build again!
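Since rustup override pins the toolchain for this directory, the rebuild needs no extra flags; standard cargo commands from the project root (cargo clean just clears artifacts built with the previous nightly):
cargo clean
cargo build --release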

Opendaylight bundles in GracePeriod and cluster not coming up

We are using the ODL Nitrogen version. When we perform a warm start (i.e., restart the Karaf servers without deleting the "KARAF_HOME/data" folder), the following bundles stay in the "GracePeriod" state for a long time, so other application bundles that depend on them fail. However, when we start Karaf in a clean state (without the data folder), all bundles come up fine.
We also noticed that the netty.tcp port 2550 does not get bound when the bundles go into the failure state. We confirmed this port is not being used by any other process either.
349 | GracePeriod | 80 | 2.3.3 | mdsal-eos-binding-adapter
350 | Active | 80 | 2.3.3 | mdsal-eos-binding-api
351 | Active | 80 | 2.3.3 | mdsal-eos-common-api
352 | Active | 80 | 2.3.3 | mdsal-eos-common-spi
376 | GracePeriod | 80 | 2.3.3 | mdsal-singleton-dom-impl
142 | Active | 80 | 2.4.20 | akka-actor
143 | Active | 80 | 2.4.20 | akka-cluster
144 | Active | 80 | 2.4.20 | akka-osgi
145 | Active | 80 | 2.4.20 | akka-persistence
146 | Active | 80 | 2.4.20 | akka-protobuf
147 | Active | 80 | 2.4.20 | akka-remote
148 | Active | 80 | 2.4.20 | akka-slf4j
149 | Active | 80 | 2.4.20 | akka-stream
310 | Active | 80 | 1.6.3 | org.opendaylight.controller.sal-akka-raft
We also observe the following logs rolling continuously; only this message appears, very frequently. It seems it is not allowing any other bundles to proceed.
2018-07-02 22:52:47,299 | WARN | saction-25-27'}} | 298 - org.opendaylight.controller.config-manager - 0.7.3 | DeadlockMonitor$DeadlockMonitorRunnable | ModuleIdentifier{factoryName='binding-broker-impl', instanceName='binding-broker-impl'} did not finish after 84984 ms
2018-07-02 22:52:50,717 | ERROR | rint Extender: 3 | 325 - org.opendaylight.controller.sal-distributed-datastore - 1.6.3 | AbstractDataStore | Shard leaders failed to settle in 90 seconds, giving up
Diag output of the GracePeriod bundles:
karaf@virtuora>diag 349
mdsal-eos-binding-adapter (349)
-------------------------------
Status: GracePeriod
Blueprint
7/3/18 6:17 PM
Missing dependencies:
(objectClass=org.opendaylight.mdsal.binding.dom.codec.api.BindingNormalizedNodeSerializer) (objectClass=org.opendaylight.mdsal.eos.dom.api.DOMEntityOwnershipService)
karaf@virtuora>diag 376
mdsal-singleton-dom-impl (376)
------------------------------
Status: GracePeriod
Blueprint
7/3/18 6:22 PM
Missing dependencies:
(objectClass=org.opendaylight.mdsal.eos.dom.api.DOMEntityOwnershipService)
Please let us know:
why akka is unable to open the netty TCP port
why the DOMEntityOwnershipService and BindingNormalizedNodeSerializer dependencies are missing
You need to set SO_REUSEADDR to enable the port to be reused directly after it is closed. See https://docs.oracle.com/javase/7/docs/api/java/net/StandardSocketOptions.html#SO_REUSEADDR
If you do not set this option, the port will stay blocked for a while, depending on the operating system.
You should also avoid forcefully killing a process where possible, as that does not cleanly shut down its ports.
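For illustration, this is what the option looks like at the plain-Java socket level (a minimal sketch; for akka-remote itself the equivalent would have to come through its netty transport configuration rather than your own code):
import java.net.InetSocketAddress;
import java.net.StandardSocketOptions;
import java.nio.channels.ServerSocketChannel;

public class ReuseAddrExample {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel channel = ServerSocketChannel.open();
        // allow rebinding while the previous socket is still in TIME_WAIT
        channel.setOption(StandardSocketOptions.SO_REUSEADDR, true);
        // 2550 is the akka cluster port from the question
        channel.bind(new InetSocketAddress(2550));
        System.out.println("Bound to " + channel.getLocalAddress());
        channel.close();
    }
}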

Cf-deployment fails on bosh-lite VM on Openstack

I have set up a bosh-lite VM on OpenStack and now want to deploy CF.
cf-deployment fails with the following errors:
Task 28 | 06:16:17 | Creating missing vms: router/96e38261-9287-4a69-b526-eb361eb36d84 (0) (00:00:01)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{0d5412ff-4140-4c10-9bcc-b82f8b8595ca}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method
Task 28 | 06:16:17 | Creating missing vms: tcp-router/0a492310-b1d7-4092-9f12-7bbdaeafa51f (0) (00:00:01)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{e2c6f33e-46b6-4600-a32e-8d113d327dd7}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method
Task 28 | 06:16:25 | Creating missing vms: database/eb7319a4-83af-4463-8cb9-455c2f3689c9 (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: nats/96b398f8-cde3-4c7f-840a-d9abbe66bf4c (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: adapter/29400d43-6d55-49b7-8368-591f4f6357cd (0) (00:00:09)
Task 28 | 06:16:25 | Creating missing vms: cc-worker/fe3e01ce-26ff-45cd-97cd-8b771af7ba7d (0) (00:00:09)
Task 28 | 06:16:26 | Creating missing vms: scheduler/59b98563-5b97-41a3-ac17-9b65998d5091 (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: singleton-blobstore/233fa4c4-7ee0-4b03-83e1-2b958014bad5 (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: doppler/880250b2-9418-4c02-9423-15bf0abe01fb (0) (00:00:10)
Task 28 | 06:16:26 | Creating missing vms: consul/d362ee07-bfc2-4569-a702-6aa9b2806c2b (0) (00:00:10)
Task 28 | 06:16:27 | Creating missing vms: log-api/7387cb18-e24e-4129-be2b-7fecfb2e3170 (0) (00:00:11)
Task 28 | 06:16:27 | Creating missing vms: api/4806fa7e-d2be-4403-b9cc-f4c3cd32269d (0) (00:00:11)
Task 28 | 06:16:27 | Creating missing vms: diego-cell/91a70e59-e815-495b-8f17-f34bfaabb3b2 (0) (00:00:11)
Task 28 | 06:16:28 | Creating missing vms: uaa/c2fb065b-84b7-42fe-85fd-400947ca48f6 (0) (00:00:12)
Task 28 | 06:16:28 | Creating missing vms: diego-api/737edb1d-4e81-48f6-9c22-169e64a3c8bb (0) (00:00:12)
Task 28 | 06:16:28 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Creating VM with agent ID '{{0d5412ff-4140-4c10-9bcc-b82f8b8595ca}}': Unmarshaling VM properties: json: cannot unmarshal object into Go value of type string' in 'create_vm' CPI method

NoHttpResponseException on uploading file to S3 (camel-aws)

I am trying to upload an ~10 GB file from my local machine to S3 (inside a Camel route). Although the file gets uploaded in around 3-4 minutes, it also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
Because of this, the Camel route doesn't stop, and it continuously throws InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(S3Client.getS3Client());
// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(Utils.getProperty(Constants.BUCKET),
        getS3Key(file.getName()), file);
try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
}
The stack trace doesn't even have any reference to my code, so I couldn't determine where the issue is.
Any help or pointers would be really appreciated.
Thanks
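For what it's worth, the SDK's retry and timeout behavior can be tuned on the client that the TransferManager wraps; a minimal sketch against the AWS SDK for Java 1.x (S3Client.getS3Client() is the question's own helper, so the bare AmazonS3Client constructor here stands in for whatever it does, minus credentials):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.transfer.TransferManager;

ClientConfiguration config = new ClientConfiguration();
config.setMaxErrorRetry(5);        // retry transient failures such as NoHttpResponseException
config.setSocketTimeout(120000);   // milliseconds; be generous for large multipart parts
TransferManager tm = new TransferManager(new AmazonS3Client(config));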