I have tried the Vagrant devenv for a multi-peer network and it worked fine. Now I am trying to do the same thing on a Mac, but I get the following error message:
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04c Error building images: cannot connect to Docker endpoint
vp_1 | 07:21:42.489 [dockercontroller] deployImage -> ERRO 04d Image Output:
vp_1 | ********************
vp_1 |
vp_1 | ********************
vp_1 | 07:21:42.553 [dockercontroller] Start -> ERRO 05b start-could not recreate container cannot connect to Docker endpoint
vp_1 | 07:21:42.553 [container] unlockContainer -> DEBU 05c container lock deleted(dev-jdoe-04233c6dd8364b9f0749882eb6d1b50992b942aa0a664182946f411ab46802a88574932ccd75f8c75e780036e363d52dd56ccadc2bfde95709fc39148d76f050)
vp_1 | 07:21:42.553 [chaincode] Launch -> ERRO 05d launchAndWaitForRegister failed Error starting container: cannot connect to Docker endpoint
Below is my compose file:
vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://127.0.0.1:2375
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start
I have tried setting the endpoint to "unix:///var/run/docker.sock", and then I get a different error message, shown below:
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 045 Error building images: dial unix /var/run/docker.sock: connect: no such file or directory
vp_1 | 07:39:39.642 [dockercontroller] deployImage -> ERRO 046 Image Output:
With CORE_VM_ENDPOINT set to unix:///var/run/docker.sock, make sure /var/run/docker.sock actually exists on your host, and mount it into the peer container if it is not already there. (Your current setting of http://127.0.0.1:2375 points at the peer container itself, not at the host's Docker daemon.)
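A minimal sketch of the compose service with the socket mounted, assuming the Docker daemon socket lives at the default /var/run/docker.sock on the host:

vp:
  image: hyperledger/fabric-peer
  ports:
    - "5000:5000"
  volumes:
    # mount the host's Docker socket so the peer can launch chaincode containers
    - /var/run/docker.sock:/var/run/docker.sock
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
  command: peer node start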
Also, refer to this related question: Hyperledger Docker endpoint not found.
I use the recommended Redash Docker image; docker-compose is tested locally and all services (server, scheduler, worker, redis, postgres) are up and running. http://localhost:5000/setup is up and running.
% docker-compose up -d
Docker Compose is now in the Docker CLI, try `docker compose up`
Creating network "redash_default" with the default driver
Creating redash_postgres_1 ... done
Creating redash_redis_1 ... done
Creating redash_server_1 ... done
Creating redash_scheduler_1 ... done
Creating redash_adhoc_worker_1 ... done
Creating redash_scheduled_worker_1 ... done
Creating redash_nginx_1 ... done
% docker-compose run --rm server create_db
Creating redash_server_run ... done
[2021-10-29 23:53:52,904][PID:1][INFO][alembic.runtime.migration] Context impl PostgresqlImpl.
[2021-10-29 23:53:52,905][PID:1][INFO][alembic.runtime.migration] Will assume transactional DDL.
[2021-10-29 23:53:52,933][PID:1][INFO][alembic.runtime.migration] Running stamp_revision -> 89bc7873a3e0
I built the image from this version and pushed it to ECR, then configured a Fargate task definition to run from this image. In the task definition I mapped ports (6379, 5432, 5000, 80, any possible ports). The task shows the worker timing out:
2021-10-29 18:31:08[2021-10-29 23:31:08,742][PID:6104][DEBUG][redash.query_runner] Registering Vertica (vertica) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08,744][PID:6104][DEBUG][redash.query_runner] Registering ClickHouse (clickhouse) query runner.
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6103)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6104)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6105)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6106] [INFO] Worker exiting (pid: 6106)
2021-10-29 18:31:08[2021-10-29 23:31:08 +0000] [6105] [INFO] Worker exiting (pid: 6105)
I manually added the two environment variables REDASH_REDIS_URL and REDASH_DATABASE_URL; they are not working either. My EC2 instance can reach RDS, so it's not a database problem:
ubuntu@ip-10-8-0-191:~$ psql -h p-d-mt-a-redash-rds-instance-1.c5clgmj5xaif.us-west-2.rds.amazonaws.com -p 5432 -U postgres
Password for user postgres:
psql (10.18 (Ubuntu 10.18-0ubuntu0.18.04.1), server 13.4)
WARNING: psql major version 10, server major version 13.
Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
postgres=> \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
mytable | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
rdsadmin | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | rdsadmin=CTc/rdsadmin
template0 | rdsadmin | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/rdsadmin +
| | | | | rdsadmin=CTc/rdsadmin
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(5 rows)
postgres=>
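For reference, Redash reads both connections from those variables as full connection URLs. A sketch of the expected shapes (the hosts and password here are placeholders, not my real values):

REDASH_DATABASE_URL=postgresql://postgres:<password>@<rds-endpoint>:5432/postgres
REDASH_REDIS_URL=redis://<elasticache-endpoint>:6379/0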
Does anyone know how to keep the worker running (not timing out and exiting)? Does Redash need a lot of extra configuration to launch on Fargate? BTW, I created the Redis and RDS instances in AWS; they are all in the same VPC, and the security group inbound rules are configured too.
I am trying to install PCF Dev on a local machine running Windows 10, using the link below.
https://pivotal.io/platform/pcf-tutorials/getting-started-with-pivotal-cloud-foundry-dev/install-pcf-dev
During installation I am getting the error below in deplo-pass.log:
Task 576 | 10:31:39 | Preparing deployment: Preparing deployment (00:01:46)
Task 576 | 10:34:11 | Preparing package compilation: Finding packages to compile (00:00:01)
Task 576 | 10:34:12 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0)
Task 576 | 10:34:12 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0)
Task 576 | 10:34:12 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)
Task 576 | 10:34:12 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0)
Task 576 | 10:34:12 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0)
Task 576 | 10:34:46 | Creating missing vms: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (00:00:34)
Task 576 | 10:34:48 | Creating missing vms: router/d3df4a57-43dd-491d-91ce-c9eda8ca88f8 (0) (00:00:36)
Task 576 | 10:34:48 | Creating missing vms: compute/927d5018-9f8d-4b94-aa37-fee45aef2280 (0) (00:00:36)
Task 576 | 10:34:49 | Creating missing vms: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (00:00:37)
Task 576 | 10:34:57 | Creating missing vms: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (00:00:45)
Task 576 | 10:34:57 | Updating instance database: database/2fe9e267-1bb0-4be6-8a4b-b61e534bcd64 (0) (canary) (00:06:47)
Task 576 | 10:41:44 | Updating instance blobstore: blobstore/8a9a1df4-39fe-4232-be3e-831d318bcb93 (0) (canary) (00:01:03)
Task 576 | 10:42:47 | Updating instance control: control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0) (canary) (01:22:36)
                      L Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
Task 576 | 12:05:25 | Error: 'control/29cf5702-030d-4cac-ac9d-5e4221562e3a (0)' is not running after update. Review logs for failed jobs: routing-api, cloud_controller_clock, credhub
How do I review the logs of the failing jobs? Is there any way to see the logs of the failed jobs routing-api, cloud_controller_clock, and credhub?
You need to install the BOSH CLI first: https://bosh.io/docs/cli-v2-install/
Make sure bosh is installed:
my-mac: bosh -v
version 6.2.1-a28042ac-2020-02-10T18:41:00Z
Succeeded
Set the environment variables for bosh to connect to pcf-dev:
my-mac: cf dev bosh
Usage: eval $(cf dev bosh env)
my-mac: eval $(cf dev bosh env)
Ask bosh to show the name of your cf deployment; in this case the name is cf-66ade9481d314315358c:
my-mac: bosh deployments
Using environment '10.144.0.2' as client 'ops_manager'
Name Release(s) Stemcell(s) Team(s)
cf-66ade9481d314315358c binary-buildpack/1.0.30 bosh-warden-boshlite-ubuntu-xenial-go_agent/170.30 -
bosh-dns/1.10.0
bosh-dns-aliases/0.0.3
bpm/1.0.3
capi/1.71.4
cf-cli/1.9.0
cf-networking/2.18.2
cf-syslog-drain/8.1
cflinuxfs2/1.267.0
cflinuxfs3/0.62.0
consul/198
consul-drain/0.0.3
credhub/2.1.2
diego/2.22.1
dotnet-core-buildpack/2.2.5
garden-runc/1.18.0
go-buildpack/1.8.33
java-offline-buildpack/4.16.1
log-cache/2.0.1
loggregator/103.4
loggregator-agent/2.3
nats/26
nodejs-buildpack/1.6.43
php-buildpack/4.3.70
push-apps-manager-release/667.0.6
pxc/0.14.2
python-buildpack/1.6.28
routing/0.184.0
ruby-buildpack/1.7.31
silk/2.18.1
staticfile-buildpack/1.4.39
statsd-injector/1.5.0
uaa/66.0
1 deployments
Succeeded
Retrieve the logs with bosh, using the name from the Name column:
my-mac: bosh --deployment cf-66ade9481d314315358c logs
Using environment '10.144.0.2' as client 'ops_manager'
Using deployment 'cf-66ade9481d314315358c'
Task 784
Task 784 | 17:54:41 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files
Task 784 | 17:54:41 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files
Task 784 | 17:54:42 | Fetching logs for compute/0820e2d7-998a-405a-bbea-e73a76ec26b4 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for router/8828d2fd-10f6-4d1e-9a3f-ac3d6ef6b833 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for database/e3b3bb98-1e73-41cf-be32-324280615a3b (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for blobstore/5c3297a4-4ad0-43bc-8041-000ca8e38e28 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:42 | Fetching logs for control/bf60350d-aab0-4db3-8cc2-05d666f5f3a8 (0): Finding and packing log files (00:00:01)
Task 784 | 17:54:43 | Fetching group of logs: Packing log files together
Task 784 Started Sat May 9 17:54:41 UTC 2020
Task 784 Finished Sat May 9 17:54:43 UTC 2020
Task 784 Duration 00:00:02
Task 784 done
Downloading resource 'f7d8c6d3-43f8-419a-a436-53a38155af47' to '/Users/my-mac/workspace/pcf-dev/cf-66ade9481d314315358c-20200509-195443-771607.tgz'...
0.00%
Succeeded
Unpack your downloaded log archive.
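For example, a plain tar extraction of the archive downloaded above:

tar -xzf cf-66ade9481d314315358c-20200509-195443-771607.tgz

Inside you should find one nested archive per instance (control, database, and so on), each containing that instance's job logs, including routing-api, cloud_controller_clock, and credhub on the control VM.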
I've configured a Cloud SQL Second Generation instance and an App Engine application (Python 2.7) in one project. I've made the necessary settings according to that page.
app.yaml:

runtime: python27
api_version: 1
threadsafe: true

env_variables:
  CLOUDSQL_CONNECTION_NAME: coral-heuristic-215610:us-central1:db-basic-1
  CLOUDSQL_USER: root
  CLOUDSQL_PASSWORD: xxxxxxxxx

beta_settings:
  cloud_sql_instances: coral-heuristic-215610:us-central1:db-basic-1

libraries:
- name: lxml
  version: latest
- name: MySQLdb
  version: latest

handlers:
- url: /main
  script: main.app
Now when I try to connect from the app (inside Cloud Shell), I get this error:
OperationalError: (2002, 'Can\'t connect to local MySQL server through socket \'/var/run/mysqld/mysqld.sock\' (2 "No such file or directory")')
Direct connection works:
$ gcloud sql connect db-basic-1 --user=root
It was successful:
MySQL [correction_dict]> SHOW PROCESSLIST;
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| Id | User | Host | db | Command | Time | State | Info |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
| 9 | root | localhost | NULL | Sleep | 4 | | NULL |
| 10 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112306 | root | 35.204.173.246:59210 | correction_dict | Query | 0 | starting | SHOW PROCESSLIST |
| 112357 | root | localhost | NULL | Sleep | 4 | | NULL |
| 112368 | root | localhost | NULL | Sleep | 0 | | NULL |
+--------+------+----------------------+-----------------+---------+------+----------+------------------+
I've also authorized my IP to connect to the Cloud SQL instance.
Any hints or help?
Google App Engine Standard provides a unix socket at /cloudsql/[INSTANCE_CONNECTION_NAME] that automatically connects you to your Cloud SQL instance. All you need to do is connect to it at that address. For the MySQLdb library, that looks like this:
import os
import MySQLdb

# App Engine exposes the instance at /cloudsql/[INSTANCE_CONNECTION_NAME];
# the connection name and credentials come from env_variables in app.yaml
cloudsql_unix_socket = '/cloudsql/' + os.environ['CLOUDSQL_CONNECTION_NAME']
db = MySQLdb.connect(
    unix_socket=cloudsql_unix_socket,
    user=os.environ['CLOUDSQL_USER'],
    passwd=os.environ['CLOUDSQL_PASSWORD'])
(If you are running AppEngine Flexible, connecting is different and can be found here)
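A quick sanity check once connected, using plain DB-API calls (nothing Cloud SQL-specific here):

cur = db.cursor()
cur.execute('SELECT NOW()')
print(cur.fetchone())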
I'm writing an AWS CloudFormation script to build an EC2 instance. I'd like to provision the instance by installing some packages, downloading some repos and running some scripts. Amazon tells me I can do this in CloudFormation with the UserData field. However, it just doesn't seem to work at all.
Here is what I'm working with currently:
DWHServer:
  Type: "AWS::EC2::Instance"
  Properties:
    DisableApiTermination: false # no termination protection
    EbsOptimized: false # optimize for elastic block store
    IamInstanceProfile: !Ref DWHServerIAMIP
    ImageId: "ami-5189a661" # ubuntu-trusty-14.04-amd64-server-20150325
    InstanceInitiatedShutdownBehavior: "terminate"
    InstanceType: "t2.medium"
    KeyName: !FindInMap [EnvMap, KeyPair, !Ref EnvType]
    Monitoring: true
    SecurityGroupIds:
      - !Ref DWHServerSG
    SourceDestCheck: true # ??
    SubnetId: "subnet-aed2ecf6" # Stage-etl-2c
    UserData: !Base64
      "Fn::Join": ["", ["#!/bin/bash -xe\n", "touch ~/confirm_work.txt\n"]]
This is the simplest possible example: I just want it to create a file to prove that it's running, but it doesn't even do that. The docs say to look at something called /var/log/cloud-init-output.log. I looked there, but don't see anything about UserData. There does seem to be some sort of network error, but I'm not sure how to interpret it or what to do about it.
Here are the contents of the cloud-init-output.log file on the instance:
Cloud-init v. 0.7.5 running 'init-local' at Sat, 04 Mar 2017 02:40:07 +0000. Up 3.85 seconds.
Cloud-init v. 0.7.5 running 'init' at Sat, 04 Mar 2017 02:40:09 +0000. Up 6.01 seconds.
ci-info: +++++++++++++++++++++++++Net device info+++++++++++++++++++++++++
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: | Device | Up | Address | Mask | Hw-Address |
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: | lo | True | 127.0.0.1 | 255.0.0.0 | . |
ci-info: | eth0 | True | 10.0.7.84 | 255.255.255.0 | 0a:3a:b0:a4:96:5d |
ci-info: +--------+------+-----------+---------------+-------------------+
ci-info: ++++++++++++++++++++++++++++++Route info++++++++++++++++++++++++++++++
ci-info: +-------+-------------+----------+---------------+-----------+-------+
ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
ci-info: +-------+-------------+----------+---------------+-----------+-------+
ci-info: | 0 | 0.0.0.0 | 10.0.7.1 | 0.0.0.0 | eth0 | UG |
ci-info: | 1 | 10.0.7.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
ci-info: +-------+-------------+----------+---------------+-----------+-------+
Mar 4 02:40:11 ubuntu pollinate[723]: ERROR: Network communication failed [60]
02:40:10.394529 * Hostname was NOT found in DNS cache
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 002:40:10.407240 * Trying 91.189.94.24...
02:40:10.550022 * Connected to entropy.ubuntu.com (91.189.94.24) port 443 (#0)
02:40:10.551661 * successfully set certificate verify locations:
02:40:10.551698 * CAfile: /etc/pollinate/entropy.ubuntu.com.pem
CApath: /dev/null
02:40:10.551804 * SSLv3, TLS handshake, Client hello (1):
02:40:10.551832 } [data not shown]
02:40:10.711080 * SSLv3, TLS handshake, Server hello (2):
02:40:10.711129 { [data not shown]
02:40:10.711191 * SSLv3, TLS handshake, CERT (11):
02:40:10.711216 { [data not shown]
02:40:10.711490 * SSLv3, TLS alert, Server hello (2):
02:40:10.711520 } [data not shown]
02:40:10.711602 * SSL certificate problem: unable to get local issuer certificate
^M 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
02:40:10.711732 * Closing connection 0
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
2017-03-04 02:40:11,144 - util.py[WARNING]: Running seed_random (<module 'cloudinit.config.cc_seed_random' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_seed_random.pyc'>) failed
Generating public/private rsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_rsa_key.
Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub.
The key fingerprint is:
0c:54:09:ab:bc:b8:63:b5:6c:d2:d5:47:21:4a:38:6f root@ip-10-0-7-84
The key's randomart image is:
+--[ RSA 2048]----+
| .oo.. |
| o...o . |
| +o. . . |
| . .Eo . |
| o. .S. |
| .... . . |
| .+.o . |
| +.= |
| ..+ |
+-----------------+
Generating public/private dsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_dsa_key.
Your public key has been saved in /etc/ssh/ssh_host_dsa_key.pub.
The key fingerprint is:
89:26:94:17:79:6d:45:15:fc:5f:37:95:31:2e:e9:f7 root@ip-10-0-7-84
The key's randomart image is:
+--[ DSA 1024]----+
| .. . oooo+o|
| .... o +.o|
| o .. . o o.|
| . . . . . ..+|
| . o S . .=|
| o . o|
| E|
| |
| |
+-----------------+
Generating public/private ecdsa key pair.
Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key.
Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub.
The key fingerprint is:
af:a2:c7:b3:95:5c:17:2e:ce:69:b3:f6:39:c7:67:91 root@ip-10-0-7-84
The key's randomart image is:
+--[ECDSA 256]---+
| |
| |
| . |
| . . |
| S o o .|
| . * + E |
| . + B . .|
| =. o.o..o o|
| .o.+....oo o |
+-----------------+
Cloud-init v. 0.7.5 running 'modules:config' at Sat, 04 Mar 2017 02:40:14 +0000. Up 11.53 seconds.
Generating locales... en_US.UTF-8... up-to-date
Generation complete.
Cloud-init v. 0.7.5 running 'modules:final' at Sat, 04 Mar 2017 02:40:17 +0000. Up 13.61 seconds.
+ touch /root/confirm_work.txt
Cloud-init v. 0.7.5 finished at Sat, 04 Mar 2017 02:40:17 +0000. Datasource DataSourceEc2. Up 13.83 seconds
Any tips would be greatly appreciated. Thanks!
Look at the second to last entry in the log:
+ touch /root/confirm_work.txt
The command is indeed invoked. Note that every command in your EC2 user data shows up in that log file (/var/log/cloud-init-output.log) with a plus sign prepended to it, like above. Is it possible that the touch command is not there? That would be surprising. But if you add an "echo" command before the touch, you should see its output, and that would confirm that everything is working. Maybe you're trying to touch a file in a directory you don't have access to; try touching a file in /tmp to narrow things down.
Protip: Always use fully qualified paths in scripts. Try this for your userdata. Does it help?
UserData: !Base64
  "Fn::Join": ["\n", ["#!/bin/bash -xe", "/bin/touch /tmp/confirm_work.txt"]]
I am trying to upload a roughly 10 GB file from my local machine to S3 (inside a Camel route). The file does get uploaded in around 3-4 minutes, but the upload also throws the following exception:
2014-06-26 13:53:33,417 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Download complete to local. Pushing file to S3
2014-06-26 13:54:19,465 | INFO | manager-worker-6 | AmazonHttpClient | 144 - org.apache.servicemix.bundles.aws-java-sdk - 1.5.1.1 | Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)[141:org.apache.httpcomponents.httpcore:4.2.4]
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)[142:org.apache.httpcomponents.httpclient:4.2.5]
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)[141:org.apache.httpcomponents.httpcore:4.2.4]
.......
at java.util.concurrent.FutureTask.run(FutureTask.java:262)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_55]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_55]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
2014-06-26 13:55:08,991 | INFO | ads.com/outbound | FetchRoute | 167 - com.ut.ias - 2.0.3 | Upload complete.
Because of this, the Camel route doesn't stop, and it continuously throws InterruptedException:
2014-06-26 13:55:11,182 | INFO | ads.com/outbound | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Disconnecting from cxportal.integralads.com port 22
2014-06-26 13:55:11,183 | INFO | lads.com session | SftpOperations | 110 - org.apache.camel.camel-ftp - 2.12.1 | JSCH -> Caught an exception, leaving main loop due to Socket closed
2014-06-26 13:55:11,183 | WARN | lads.com session | eventadmin | 139 - org.apache.felix.eventadmin - 1.3.2 | EventAdmin: Exception: java.lang.InterruptedException
java.lang.InterruptedException
at EDU.oswego.cs.dl.util.concurrent.LinkedQueue.offer(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at EDU.oswego.cs.dl.util.concurrent.PooledExecutor.execute(Unknown Source)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.DefaultThreadPool.executeTask(DefaultThreadPool.java:101)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.tasks.AsyncDeliverTasks.execute(AsyncDeliverTasks.java:105)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.handler.EventAdminImpl.postEvent(EventAdminImpl.java:100)[139:org.apache.felix.eventadmin:1.3.2]
at org.apache.felix.eventadmin.impl.adapter.LogEventAdapter$1.logged(LogEventAdapter.java:281)[139:org.apache.felix.eventadmin:1.3.2]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fire(LogReaderServiceImpl.java:134)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.LogReaderServiceImpl.fireEvent(LogReaderServiceImpl.java:126)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggingServiceImpl.handleEvents(PaxLoggingServiceImpl.java:180)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.service.internal.PaxLoggerImpl.inform(PaxLoggerImpl.java:145)[50:org.ops4j.pax.logging.pax-logging-service:1.7.1]
at org.ops4j.pax.logging.internal.TrackingLogger.inform(TrackingLogger.java:86)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.ops4j.pax.logging.slf4j.Slf4jLogger.info(Slf4jLogger.java:476)[18:org.ops4j.pax.logging.pax-logging-api:1.7.1]
at org.apache.camel.component.file.remote.SftpOperations$JSchLogger.log(SftpOperations.java:359)[110:org.apache.camel.camel-ftp:2.12.1]
at com.jcraft.jsch.Session.run(Session.java:1621)[109:org.apache.servicemix.bundles.jsch:0.1.49.1]
at java.lang.Thread.run(Thread.java:744)[:1.7.0_55]
Please see my code below and let me know where I am going wrong:
TransferManager tm = new TransferManager(S3Client.getS3Client());

// TransferManager processes all transfers asynchronously,
// so this call will return immediately.
Upload upload = tm.upload(
        Utils.getProperty(Constants.BUCKET),
        getS3Key(file.getName()), file);

try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
}
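For what it's worth, the TransferManager javadoc recommends calling shutdownNow() once the manager is no longer needed; otherwise its internal thread pool can keep worker threads alive after the transfer finishes. A sketch of the same try/catch with a shutdown added, assuming no other transfers share this manager:

try {
    upload.waitForCompletion();
    logger.info("Upload complete.");
} catch (AmazonClientException amazonClientException) {
    logger.warn("Unable to upload file, upload was aborted.");
    amazonClientException.printStackTrace();
} finally {
    // release TransferManager's worker threads so they don't keep the route alive
    // (note: by default this also shuts down the underlying S3 client)
    tm.shutdownNow();
}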
The stack trace doesn't have any reference to my code, so I couldn't determine where the issue is.
Any help or pointer would be really appreciated.
Thanks